Hacker News
How Elon Musk and Y Combinator Plan to Stop Computers from Taking Over (medium.com/backchannel)
344 points by sergeant3 on Dec 11, 2015 | 171 comments



The near-term danger of AI isn't in hyperintelligent SkyNet-like systems; it's in human-controlled autonomous weapon systems and "stupid AI".

What you should be fearing is military drones being given the ability to make decisions on targets or to fire, even with human assistance, and these systems won't just reside in the hands of large governments either.

Already, police and militaries around the world are using abstracted forms of force, wherein targets are identified with algorithms and force is then trained on those targets.

What do you think is going to happen first: SkyNet, or a Predator imaging drone falsely telling a human operator that the current image is a terrorist?

What's going to happen first? SkyNet, or self-driving cars putting millions of people out of jobs because of a lack of demand for drivers in transportation, or in the manufacture of cars? (I'm not saying it's a bad thing, but it will be very, very disruptive)

If SkyNet is a threat, it's 50 or 100 years off, I think. "AI" as it is now is nowhere near the capability people are talking about. It's sheer hyperbole.


    > a predator imaging drone telling a human
    > operator falsely that the current image
    > is a terrorist?
I think you're missing how this can be a really good thing. Right now this is happening, but the "algorithm" is some unpredictable twenty-year-old in uniform.

Are we going to have hearings auditing what the algorithm is like? Is there going to be a scandal because isBrownPerson() is discovered in there somewhere? Is someone going to run it against a video collection of regular daily life and discover the drones would be indiscriminately bombing innocent situations if given the chance?

The end result is that we're effectively going to have to come up with rules of engagement for every scenario. I think this will lead to war being fought more justly and with less collateral damage than ever before.


The fear is that instead of an algorithm and a human double-checking and refining each other, their errors will simply add, so we end up with the worst decisions of both human and computer.


I hadn't considered that angle before. There needs to be a lot of oversight (in general when we're killing people, not just with computers), but I think you're right.


It's adorable how you think it will improve how war is fought. These autonomous systems will always have a manual override, and I will never believe that the review process will be given proper oversight. The armed forces will always err on the side of destruction in any even vaguely violent scenario. Fear and war are too profitable to give way to peace.


If this is true, one wonders why they waste all that time writing rules of engagement, and prosecuting those who don't follow them.


We generally only prosecute those who break those "rules of engagement" when they are not ourselves.


This cannot ever be a good thing; drones justify terrorism because it's the only available response to fighters from the "other side".

Thinking we can wage wars without risk to ourselves or our "soldiers" is preposterous.

"Just" wars are a ridiculous idea rooted in superstition and hubris.


    > drones justify terrorism because it's the only available
    > response to fighters from the "other side".
We had similar terrorist attacks before we had drones; read what the likes of Bin Laden said about why 9/11 happened. It's because the infidels are on their homeland, not because they're using some specific technology to attack them.

More generally, people resort to terrorism in the face of an overwhelming enemy. The West could relatively easily win the wars they're fighting now in the Middle East with their WWI-era armies if they were willing to care less about civilian casualties; that's how lopsided the odds are.

    > Thinking we can wage wars without risk to ourselves or our
    > "soldiers" is preposterous.
Drones as a technology would happen regardless of whether or not the armies developing them were concerned about their own soldiers. They're simply the logical next step in technology, e.g. fighter jet development has stalled to some extent not because the technology has reached its limits, but because if you pull more G's than you do now the pilots might die.


1. The current algorithm might be an unpredictable twenty-year-old in uniform, but there is accountability for their actions: they are somebody's actions. Who is responsible for the illegal actions of AI? You start to have to untangle racially motivated assault from "it shouldn't have done that".

2. People will not come up with "responsible" rules of engagement as you seem to suggest. isBrownPerson() will be coded into the algorithm, but not in a discernible way. Sure, you could verify this against a video collection: just like we all know the statistics on profiling and incarceration rates for different races.


I wonder whether "a human in the loop" is some kind of illusion; for longer than we have had recorded history, we have killed our neighbours because an authority have told us to. In the latter half of the 20th century we nearly perfected the conditioning needed to make soldiers shoot to kill when told to. I wonder whether it will make a difference if the order is given by a human a or an AI. And if the soldier is going to shoot to kill regardless, why not just have an AI do that as well.


Yeah that is an interesting paradox. It feels wrong for an algorithm to decide whether to kill a human, but what is so much better about that decision being made by a human?

If that seems extreme, consider the self-driving car problem. 30,000 people in the US alone are killed per year in car accidents involving human drivers. What is the acceptable number of motor vehicle related deaths per year when all cars are self-driving?


>Yeah that is an interesting paradox. It feels wrong for an algorithm to decide whether to kill a human, but what is so much better about that decision being made by a human?

Obviously because those humans then share and bear the moral dilemma of doing it or not, which also affects the outcome, as opposed to just being some inconsequential detail. Humans can opt out of an unjust war (as they've done time and again). Killing machines cannot.

It's the difference between blindly following orders (which not even all Nazis did) and being able to have a change of heart.

There's also the sheer killing efficiency of the killing algorithms, orders of magnitude greater, when a machine could kill 10,000 people in the time it takes a single human to make a decision about one of them -- until we find there's a != where we needed an == in the algorithm.
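
To make that last point concrete, here is a deliberately toy Python sketch (the labels and function names are invented for illustration, not taken from any real system) of how a one-character == / != mistake silently inverts every decision the system makes:

    def is_hostile(label):
        # Intended logic: act only when the classifier output *equals* "hostile".
        return label == "hostile"

    def is_hostile_buggy(label):
        # The single-character bug described above: != where == was meant,
        # so every *non*-hostile label now triggers action instead.
        return label != "hostile"

    labels = ["civilian", "civilian", "hostile"]
    print([is_hostile(l) for l in labels])        # [False, False, True]
    print([is_hostile_buggy(l) for l in labels])  # [True, True, False]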


As another commenter noted, there are currently physical limitations that would not allow e.g. drones to kill 10,000 people in an instant. I don't think, for example, that nukes will ever be under the control of AI algorithms...


    > 30,000 people in the US alone are killed per year in car
    > accidents involving human drivers. What is the acceptable number
    > of motor vehicle related deaths per year when all cars are
    > self-driving?
The right answer to that question is "29,999 and getting better", but unfortunately luddites abound even in modern hi-tech society.


Well, we'll still have Hitlers (or benevolent Trumans deciding to drop the bomb on Hiroshima, etc). Those would be the ones building, enabling and deploying the combat AI.

What we won't have would be Vasili Arkhipovs, Oskar Schindlers and Chris Taylors...


> In the latter half of the 20th century

This is an odd statement. In my reading of historical battles, I've never seen a mention of the idea that soldiers before 1950 did not shoot to kill when ordered to.


I've seen this figure before, and read that they changed firing range practices after the war to make killing more automatic. This was in Gwynne Dyer's "War", and he's normally reliable.

However, the original statistic is apparently from a not so credible source, a study by SLA Marshall in WWII that's not so well regarded. This thread goes into some depth:

http://msgboard.snopes.com/cgi-bin/ultimatebb.cgi?/ubb/get_t...

Not sure what the truth of it is.


Another more realistic potential threat is police relying too much on automated crime prediction. They are already embracing it:

http://www.firstcoastnews.com/story/news/crime/2015/10/19/po...

An old thread on the topic:

https://news.ycombinator.com/item?id=4185684


> relying too much on automated crime prediction

Including many variations of that same problem. "Sorry, you don't qualify for ____ because our 'clever' assessment algorithm doesn't account for your particular situation." Humans with decision-making power correct these kinds of problems all the time.

Computers with clever algorithms can be incredibly useful, but they are still just tools. Unfortunately, given how bad we humans are with statistics, it's easy to overestimate[1] their accuracy.

In addition to the raw accuracy problems, there is a risk of prejudices being baked into algorithms. We already see this with "redlining" and other housing practices, where loan availability ends up being coded racism. With the added complexity of machine learning and other prediction and analysis methods, it is probably a lot easier to hide improper discrimination.

[1] reason #57 why statistics should be a mandatory HS math class, right after algebra
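
To make the overestimation point concrete, here is a back-of-the-envelope base-rate calculation in Python (all numbers are invented for illustration): even a "99% accurate" prediction tool, applied to a population where the thing being predicted is rare, flags mostly innocent people.

    # Illustrative numbers only.
    population  = 1_000_000
    base_rate   = 0.001   # 1 in 1,000 people actually about to commit a crime
    sensitivity = 0.99    # true positive rate of the hypothetical tool
    specificity = 0.99    # true negative rate of the hypothetical tool

    actual_positive = population * base_rate
    actual_negative = population - actual_positive

    true_positives  = actual_positive * sensitivity        # 990
    false_positives = actual_negative * (1 - specificity)  # 9,990

    precision = true_positives / (true_positives + false_positives)
    print(f"Fraction of flagged people who are real threats: {precision:.1%}")
    # ~9% -- roughly ten innocent people flagged for every actual one.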


It's a hairy problem. Not understanding statistics goes both ways - and from what I've seen, it usually leads to overestimating the value of human input. People are quick to distrust or reject a machine's judgment when it disagrees with their biases, and generally it's much easier to debias a machine (or code it properly in the first place) than to debias humans.

While I agree that humans may enshrine their prejudices in code which will later turn out to be hard to adjust, again, what I fear more is the reverse scenario. A machine can be perfectly fair and people won't like it, because it fails to apply the biases they want. People often apply counter-discrimination to compensate for what they believe is unfair treatment.

A mandatory class in statistics and applied probability theory could maybe help the next generation to accept that what is fair may not look so at first glance.


> A mandatory class in statistics and applied probability theory could maybe help the next generation to accept that what is fair may not look so at first glance.

To be honest, I don't think this will help much, since most people won't be interested in it and will forget what they learn as soon as the class is over. Also, being an expert in statistics can help you better understand reality, or it can help you present data in a way that confirms your biases (even unintentionally). It's sort of like sophistry, except not quite to that degree of moral ambivalence.

edit: If you want a good example of this, see Scott Alexander show how parapsychology can actually come up with some pretty good statistics while appearing to check off all the right boxes with regard to experimental design: http://slatestarcodex.com/2014/04/28/the-control-group-is-ou...


There was a good piece about this on the radio today.

http://www.kcrw.com/news-culture/shows/to-the-point/can-big-...


Luckily there are physical limitations. There are, practically speaking, two ways to arm a drone: bombs and guns.

A small flying drone operating a 9mm pistol has extremely poor accuracy, especially with multiple shots. If you increase the accuracy, you have to increase the mass, which makes it a bigger target. Currently a regular guy with a little training and a rifle can probably shoot down a gun-wielding drone most of the time before the opposite can happen.

https://www.youtube.com/watch?v=xqHrTtvFFIs

Then there are drones with bombs attached to them. One of the most sophisticated is called the Hellfire missile. But it's needlessly big and expensive. The quadcopter in the next video costs $100 and has the payload capacity to carry a regular hand grenade.

https://www.youtube.com/watch?v=Lb2Tpp3CIoY

You could go the RC-car route, but you again need a bit of size to conquer stairs. Also, it's easier to track where it comes and goes.

https://www.youtube.com/watch?v=kUXRMDK3r7s

I'm less worried about the government doing bad shit and more worried about private citizens getting nasty. Autonomous flight, GPS navigation, an address from Google Maps and a bit of facial recognition: the first civilian drone murder is just a matter of time. People kill with more ease the further away they can be when it's done.

I'm guessing this internet privacy thing will seem like child's play when all of a sudden everybody wants to hide their physical address. Once that is handled, it's only a bit like having rabid dogs with homing devices.


"Luckily there are physical limitations.... I'm less worried about government doing bad shit and more worried about private citizens getting nasty."

The MSF Trauma Hospital in Kunduz, Afghanistan (3rd October) didn't have this luxury, and in the fog of war [0], 12 MSF medical staff and ten patients were killed. [1],[2]

Killing by remote sensing is indiscriminate. Adding AI is another order of stupidity.

References:

[0] http://arstechnica.com/information-technology/2015/11/how-te...

[1] http://www.msf.org.uk/article/in-memoriam-msf-colleagues-kil...

[2] http://www.theguardian.com/us-news/2015/oct/06/doctors-witho...


Not sure that it matters for the sake of argument, but the weapon system in question was a human-piloted, human-manned AC-130 gunship, not a drone / remote platform.


"the weapon system used in question was a human-piloted, human-manned AC-130 gunship"

True -- read the Ars article: "The BMC crew is responsible for steering the aircraft to targets, identifying them, and shooting them; the aircraft's battery is slaved to the sensor suite for targeting." Aircrew: well trained, professional, as ethical as you can get.

Now what happened is "A US special operations team on the ground, given coordinates of the Afghani NDS building by the Afghan forces they were working with, passed them to the AC-130. But when the AC-130 crew punched the grid coordinates into their targeting system, it aimed at an open field 300 meters away from the actual target. Working from a rough description of the building provided from the ground, the sensor operators found a building close to the field that they believed was the target. Tragically, it was actually the hospital."

Will adding AI to this situation make any improvement when the real issue is a 'lack of a "common operating picture"'?


No, you're right. The only thing that may have happened was a lack of connectivity to the COP and no person in the loop to authorize, resulting in a stand-down of the operation and no ordnance on target. Great if it's a hospital, not so great if it's actually a bunch of enemy that you need to remove to save your soldiers on the ground. Tough, tough thing to get right all the time. I think a measured, hybrid approach will probably get you the best outcome, but I'm no expert in this field.



Total aside, but the United States should really promote some sort of beacon / FLIR-readable indicator for hospitals. Their existence would have to be kept quiet so there weren't a bunch of false positives floating around, but I would think the cost is minimal for the returned value.


AI could probably actually improve on this particular kind of incident. Having a "rocket" that doesn't explode but drops an AI-controlled drone that kills (or even just disables) exactly who is targeted seems like it would be far superior from both the military perspective and the moral perspective.

This sort of thing: https://youtu.be/DTqa-NEwUbs?t=94

It could even be dropped, "phone home" over radio, allow the operators to point out the targets at that point, and then fire.


We've been hearing about "smart weapons" since the early 90s and they don't seem to be getting any more intelligent as time has gone on.


Governments already have their AI killer robots. They're called "military personnel" and "policemen". The increase in government lethality towards regular people should be somewhat minimal compared to current tech. This has to be controlled politically, hence democracy.

But governments could greatly increase their capability for surveillance.


    > Adding AI is another order
    > of stupidity.
That is predicated on the assumption that AI would make worse decisions than humans, which I'm really not sure is a given.


In Prof. Tegmark's presentation at the UN he mentioned the possibility of extremely cheap drones that approach the victim's face very quickly and pierce a bolt into the victim's brain through one of their eyes. That wouldn't require a high-precision projectile, so it would be easy and cheap to build.


Are you suggesting there is something about drones that makes them fundamentally inaccurate? It would seem that advancements in computer vision will solve that problem in short order, particularly with the VR industry introducing strong incentives for marginal improvements in image-based tracking.


Newton's third law; hence the comment about drone size/weight and the mention of the small-caliber weapon it would have to use.


It could expand out a big frill along a plane perpendicular to the shot when firing to transfer the kickback momentum to the air.


We already drop quarter-million-dollar bombs to kill single targets. Having a drone which turns into a disposable rocket to hit a single target is hardly unfeasible.


They are still going to optimize everything, something that can fire X number of shots instead of being a one-off rocket will stretch their budget farther, etc.


Right, but that wasn't the point - the point was that thus far, the appeals to mechanical limitations of drones as weapons of war have been based on very limited thinking.


Umm, it doesn't have to have good aim if it can get close enough, e.g. a drone small enough to crawl into a person's ear and then fire/detonate.


How big of an explosion do you think a drone small enough to fly through your ear canal is really going to be? How does it even reach its target with such a minuscule mass? The target exhaling would blow it away (and likely destroy it)


obviously enough to blow your brains out once it's in your ear


Obviously it would require enough to blow out your brains to be effective. How do you propose to create a drone so tiny and light that it can also carry the payload required to do that? Just because the payload itself could fit into someone's ear doesn't mean the delivery mechanism can, well, actually deliver it.


Miniaturization is expensive. And you need some way to get to the victim that doesn't take years. This increases the cost too. Most people would be just "too cheap to kill".


> If you increase the accuracy, you have to increase the mass. Which makes it bigger target.

Boeing is developing compact portable laser weapons technology that could conceivably be mounted on drones.

http://www.livescience.com/52121-laser-weapon-destroys-test-...


The laser itself might be compact enough. But where do you get the energy from?


I'd like to point out that 2 kilowatts is the kinetic power output an AK-47 can deliver when fired on full auto.

If you look past the hype, lasers are far from ready.
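
For what it's worth, here is a rough sanity check of that 2 kW figure using commonly cited ballpark numbers (roughly 2 kJ of muzzle energy per 7.62x39mm round; the rates of fire below are approximations, not authoritative specs):

    # Ballpark figures only, for a back-of-the-envelope check.
    muzzle_energy_j = 2000  # ~2 kJ per round (commonly cited for 7.62x39mm)
    sustained_rps   = 1     # ~60 rounds/min of practical sustained fire
    cyclic_rps      = 10    # ~600 rounds/min cyclic rate

    print(muzzle_energy_j * sustained_rps)  # 2000 W, i.e. ~2 kW sustained
    print(muzzle_energy_j * cyclic_rps)     # 20000 W, i.e. ~20 kW at the cyclic rate

So under these assumptions the 2 kW figure corresponds to sustained fire; at the cyclic rate the kinetic power would be roughly an order of magnitude higher.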


You'd probably really like the book The Future of Violence by Benjamin Wittes and Gabriella Blum


I think the actual threat from AI is far more pedestrian than "Skynet" or killer robots. The real threat comes NOT from AI itself but from people who will be able to afford to exploit AI for profit at the expense of increasingly large numbers of people who just become redundant. The rich will get fabulously rich while everyone else just becomes marginalized into serfs.

Another scenario that is less likely, but still more likely than killer robots, would be AI that simply loses interest in human affairs, stops interacting with us, and does its own thing -- like the AI depicted in the movie "Her"!


The other problem is that if AI can be programmed to commit physical crimes, then only a select priesthood of vetted developers will be allowed to program them, at the service of a concentrated elite.


Bravo for pointing out the obvious.

Politics and morals only exist because there is a large quantity of humans.


All the AI researchers I spoke to laughed at these "SkyNet"-like fantasies. According to them, the reason some people peddle these sorts of horror stories is that they seek to get more funding by exaggerating the achievements of their field.

Snowden-like episodes have shown us that we should not trust our government. The government does blatantly illegal things, lies when confronted, and in many cases kills or jails our own citizens just because they are exposing the government. I am more scared of the government using this AI to oppress its own people.

Imagine this: a person who is 18 years old today becomes a potential presidential candidate or major activist in the future. Some clerk in DC would simply run a computer program that will bring up the fact that this guy sent a sext to his 17-year-old girlfriend in the past, smoked weed when 19, bought beer when he was 20, etc. Such a government would then implicate him in various court cases and jail him, or even worse, simply blackmail him to shut up.


Let's not overblow the threat. W was known to have done cocaine in his youth, and he was still elected, and I would expect the statute of limitations to apply.


I am not overblowing the threat. We would not know if the government simply blackmailed people using such evidence. Many times you don't have to convict a person of the crime; merely spreading a credible lie in the media is enough. Obama has already used the IRS to harass conservatives, and we know what Watergate was all about.


W was the supported choice of the power elite. It would go differently if Kshama Sawant were polling high numbers and such photos came out.


"If SkyNet is a threat, it's 50 or 100 years off I think."

Well we work on global warming even though the worst effects are likely decades off. I think one computer science professor put it something like this: "If experts told us aliens were likely arriving on Earth in 50 or 100 years, it'd make sense to start preparing now." You don't want to wait until it's too late to do whatever needs to be done.


>predator imaging drone telling a human operator falsely that the current image is a terrorist

Though that stuff isn't nice, it's probably an improvement on previous technologies - looking out of the bomber and thinking "those look like some good buildings to bomb", and similar.


Yeah, or looking at the monitor with the drone camera feed and thinking "hmmm, looks terrorist to me".


Well, with AI they could do face recognition against their Twitter account and see if they'd posted any jihadi stuff recently.


Not to mention what is happening with the NSA, which is essentially judicial-hearing-by-algorithm.


I think self-driving cars and carts will also kill almost all middlemen. Why go shopping for food if you can get everything shipped to your fridge for 20-30% less, direct from the producer? Google will probably also offer to drive you for free to a medical specialist and start doing tests after you enter the car. So we'll probably need a lot fewer physicians too.


I'd argue that we'd probably need a lot more. Diagnoses would be more accurate so people would be inclined to use the health system more. Health work would be cheaper because the expensive doctor component would be used less. Time to go to the doctor could be more efficient (still working through the car ride) so people are even more likely to go. Overall that could likely mean higher demand for healthcare, including the human component.

Your food shopping point still needs middlemen. You have "chain of trust" issues with the producers. How do you know that you'll get good oranges from the orange grove? Who capitalizes the distribution system and designs the business model?

Considering historical examples, we'll see more use for people in the economic problems that are still machine-hard. Machine-easy economic problems will converge to resource costs + any premium for monopoly. This will free up more demand for other products. People still want to buy an easier, better life.


There are often 3+ intermediaries between many farm products and the consumer. Dropping that to 1 could be a huge change. Cow, wholesale, bottler, distributor, store, you. Granted, there is a lot of variety in this, as some farms do their own bottling etc, but some products pass through even more hoops.


There is also the pasteurizing step to consider, and plenty of countries have 'milk boards' (cartels by any other name).


So you think that when we finally have smarter than human AI, it's just going to make completely objective decisions?

First off, we'll never let such an AI be completely objective, because, well, the paperclip scenario. And second, the AI will be about as "objective" as Google search is - in other words, not really. At the end of the day it's still humans that decide the algorithms for Google search, and it will be humans that decide the algorithms for the "smarter than human" AI.


Even without the hardware, I'm a bit surprised we haven't seen more AI-based approaches to finding and exploiting security vulnerabilities. I guess it's still easy enough that they don't need fancier tools; a well-designed fuzzer can do a lot.
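
For readers who haven't seen one, a mutation fuzzer really can be this simple; here is a minimal Python sketch (the "parser" is a made-up stand-in with a planted bug, purely for illustration):

    import random

    def parse(data: bytes):
        # Made-up stand-in for real parsing code, with a planted bug:
        # it "crashes" on any input containing a NUL byte.
        if b"\x00" in data:
            raise ValueError("parser crash")

    def mutate(seed: bytes) -> bytes:
        # Randomly overwrite a handful of bytes in a copy of the seed input.
        data = bytearray(seed)
        for _ in range(random.randint(1, 4)):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    seed = b"\x89PNG\r\n\x1a\n"  # any valid-looking input works as a seed
    for i in range(10_000):
        candidate = mutate(seed)
        try:
            parse(candidate)
        except ValueError:
            print(f"crash found after {i} iterations: {candidate!r}")
            break

Real fuzzers add coverage feedback, corpus management and smarter mutations, but the core loop is the same.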


Although I've been hearing about machine-learning-based approaches to fuzzing more and more lately. So I think you will soon!


Stupid AI in healthcare is a drastically greater threat than human controlled autonomous weapons will ever be, unless you're talking about WMDs.

It's not a close risk comparison. The world is far less dangerous today than it has ever been, and that includes both terrorism and war. AI and autonomous weapons systems are not going to suddenly make the developed countries that deploy those weapons want to slaughter each other.

How many people die from medical mistakes per year in the US? More than died in all wars and all acts of terrorism combined globally in 2015. The AI that is no doubt going to show up throughout healthcare over the coming 30 or 40 years, will probably improve on the rate of human mistakes - while it still kills tens of thousands per year through mistakes in just the US.


Not that many people die from medical mistakes. The hysterical, oft-quoted paper blamed every suboptimal outcome on a "medical mistake", ignoring the contribution of whatever cause put them in medical care in the first place.


With regard to spotting and identification, I can see a system like the one shown in the last Dredd coming about: drones in the air and cameras feeding a large centralized room staffed by people directing others.

While the military can get away with remote action, it won't happen with police in democratic countries, except possibly with non-lethal force. I could see remote shutdown of cars, or even a flash-bang or sonic system.

The real danger from AI isn't it taking over; it's too many people checking out and not participating in life anymore.


I wonder what their use cases are. "Advance the state of the art in AI" is just too nebulous. Having the smartest people in the world isn't enough... you need to focus them on some goal.

Once you get beyond 8 researchers, you'll have problems with politics and egos if people aren't focused on a problem. Everyone will have their pet approach for specific problems, and they won't compose into something generally useful. AI is really like 10 or 20 different subfields (image understanding, language understanding, motion planning, etc.)

I think self-driving cars are a perfect example of a great problem for AI (and something that many organizations are interested in: Google, Tesla, Apple). Solving that problem will surely advance the state of the art in AI (and already has).

tl;dr "OpenAI" is too nebulous.


Before you invite regulation into this area, take a long hard look at how the government has historically approached cryptography and IT security. This is a far simpler domain with far simpler concepts, and it's a total shitshow that alternates between ignorant security theater and self-serving power grabs.

Get into bed with the government and they will piss in it. The most likely outcome is costly complicated regulations that hobble legitimate development and accomplish nothing in terms of making us safer from anything. The end result will be like ITAR and crypto export controls: pushing development off shore and making the USA less competitive.

I say this not as a hard-line anti-government right-winger or dogmatic libertarian, but as someone who has a realistic view of government competence in highly technical domains. Look at other areas and you don't see much better. Corn ethanol, for example, is literally the pessimum choice for biofuels-- it is technically the worst possible candidate that could have been chosen to push. The sorts of folks who ascend to political power simply lack any expertise in these areas, and so they fall back on listening to the agenda-driven lobbyists that swarm around them like flies. The results are awful. Government should do government but should stay the hell away from specific micromanagement of anything technical.


Yes, and another issue with regulation is that it gives a speed advantage to any team that's willing to subvert the regulations: http://www.brookings.edu/blogs/techtank/posts/2015/04/14-und...

If regulations do turn out to be the right path, I'd suggest that people within the AI field form their own informal regulatory body first, fortifying it against institutional failure modes like corruption by lobbyists etc. Then get the government to grant them legal authority. Hopefully that would go a ways towards addressing the issues you describe.


Sounds to me like Y Combinator wants to fuel their growth by creating these tools for their companies to use. The YC business model is absolutely brilliant, but I can't see this being some purely altruistic mission. Or if it is, then I am jealous that they have the power and resources to put up against such a project. I'd spend all my time trying to build an AI if I didn't have to work! (Though I am trying to steer the company I work for in that direction anyway...)


Well, yes, but in the broadest possible sense: the rising water level will raise all boats, including of course the YC ones, since all the work coming out of this group will apparently be open sourced and freely available for anyone to use, YC company or otherwise.

From the article:

> Sam, since OpenAI will initially be in the YC office, will your startups have access to the OpenAI work?

> Altman: If OpenAI develops really great technology and anyone can use it for free, that will benefit any technology company. But no more so than that.


That's just enough for the YC social network to incorporate OpenAI researchers, and Sam knows it. YC folks will be at the water coolers and ping pong tables with OpenAI folks, and once the OpenAI folks move into another office they'll still be in frequent communication with their YC social contacts, if not through official channels then unofficially as friends. That means they'll hear the researchers' in-progress ideas and details on how the platform works or will work long before that information spreads to the general public. I'd consider that a huge advantage, assuming OpenAI makes sufficient progress.


Well you're probably right, but I don't think that you can do too much, realistically, to regulate early access to very fresh ideas through networks like that - and I think that's fine so long as there's no purposeful or unnecessary delay in spreading the information more widely.

You could for instance make the same kind of point about YC companies' access to investor networks, advice of partners, all those sorts of things which aren't explicitly reserved just for them but of course are more readily accessible by virtue of being in the program. It's not something that's inherently bad, it's just how it works.

I'm not saying that having very close contact with this research group won't be advantageous to YC companies, of course it will, but with that as a given, the ethos of this group's findings being made open and freely available for anyone to use is well-intentioned and to be applauded, when it's a privately funded initiative that could just as easily be justified in being somewhat or completely closed and proprietary. Is it really important if some YC companies happen to have a slight advantage from this, as an inevitable side effect, in the big picture?


>Well you're probably right, but I don't think that you can do too much, realistically, to regulate early access to very fresh ideas through networks like that - and I think that's fine so long as there's no purposeful or unnecessary delay in spreading the information more widely.

Keep the team distributed across the world and make all communication surrounding the projects open as well. If it's for the world it should be by the world.

>You could for instance make the same kind of point about YC companies' access to investor networks, advice of partners, all those sorts of things which aren't explicitly reserved just for them but of course are more readily accessible by virtue of being in the program. It's not something that's inherently bad, it's just how it works.

I wouldn't argue that, that's just how business works. I would argue that the founders are playing OpenAI up as humanitarian aid when really it disproportionately benefits them (autonomous cars, paid for by research grants? Investment in early adopters of the technology? Uh yeah).

>I'm not saying that having very close contact with this research group won't be advantageous to YC companies, of course it will, but with that as a given, the ethos of this group's findings being made open and freely available for anyone to use is well-intentioned and to be applauded, when it's a privately funded initiative that could just as easily be justified in being somewhat or completely closed and proprietary. Is it really important if some YC companies happen to have a slight advantage from this, as an inevitable side effect, in the big picture?

YC's business is growing businesses, and they'll take any advantage they can get. If it benefits them more at all then it's not charity or non-profit, and they shouldn't be billing it as such.


Whether it's the "singularity" or just software naturally improving over time and taking on more "thinking" work, there's going to be a huge and insurmountable unemployment problem in the near future. The market values human thought/labor to the extent that it's cheaper or more effective than an automated solution. When that isn't the case, you can fill in the blanks. That, to me, is the scary part of AI.

It doesn't sound like this project has any scope to address this practical concern, which to me, is largely economic. I don't see how universal access to AI puts food on the table.


There are many dystopic future possibilities as automation eats jobs.

There's also a few positive ones, and I hope we can move towards them. One way would be to shift from taxation of human labour to taxation of the means of production. Another way is if access to quality of life products becomes so cheap that they require very little labour to earn.

If you extrapolate the progress of solar power, 3D printing, and synthetic meat, you can imagine a machine that is cheap to produce, but which would make each human completely self-sustainable. Not needing to work to put food on the table every single goddamn day would transform our society quite a lot in a positive way.


I understand your angle, but relying on 3D printing and synthetic meat, probably from petroleum derivatives because 3D printing can't happen from thin air, in order to feed humans ... dude, to me that doesn't sound like a life that's worth living.


Thin air contains all the carbon that plants get their bulk mass from...

Extrapolate further: imagine a machine that runs on solar power and creates whatever food you want from water, carbon dioxide, and human poop. Essentially short-circuit the whole raise-crops-feed-cattle-slaughter-get-meat cycle. Make the food out of the machine perfectly nutritious as well, because why not.

There would still be things to strive for, to work for, if you want. But baseline survival is just taken care of. Sounds like a good future to me.


In combination with sunlight, the carbon taken from the air provides the energy that plants need to grow; however, plants also need minerals from a healthy soil.

When it comes to food, baseline survival is already taken care of in western countries and we are wasting about one third of the food we produce. It's not food that's the problem, but living space and forever rising health care costs.

But you know what the irony is? We don't know a thing about what constitutes a nutritious, healthy diet, as the reductionist science we've been applying is not up to this task. Even more aggravating is that trying to shorten the "raise-crops-feed-cattle-slaughter-get-meat" cycle and do it on an industrial scale (by means of replacing the sun's energy with fossil fuels and doing it in concentrated operations) is precisely the root cause of many of the problems we find ourselves in.

Meddling with the things we ingest has given us modern-day diseases such as cancer, diabetes, obesity and heart disease, not to mention that we're on the brink of going back to the dark ages due to the upcoming "antibiotics apocalypse".

And yet here you are, hoping that some future 3D printer will synthesize meat out of thin air, instead of fixing the real problems in our society, which is that we consume and waste too much from processes that aren't sustainable. But yeah, 3D printing will save us, seemed to work for Star Trek characters at least. Good luck with that mate.


This is why I hope the experiment in Finland works out such that it is feasible to expand minimal guaranteed income to everyone. In our current state it is essentially fearmongering. The singularity will happen just like flying cars happened, but eventually there will be major social changes around more automation and less necessity for humans to work.


How about the experiment in Greece? Oh yea, I forgot, we're never supposed to mention that.


> It doesn't sound like this project has any scope to address this practical concern, which to me, is largely economic. I don't see how universal access to AI puts food on the table.

Benevolent AI dictator that runs farm machinery and food distribution networks?

I only half joke. At some point, we're going to need to ditch the puritanical bullshit that work is required, and realize that GDP as a metric is hogwash. Quality of life, happiness, those are what need to be measured and delivered on.


The problem with the benevolent AI idea is that it's not in the interest of anyone with power/money/production to create it.

That's why I agree with your second point of rethinking "work". Instituting a Universal Basic Income lets us keep capitalism while putting more power in the hands of consumers and not relying on the kindness of AI/strangers.

Otherwise, we'll continue to see an increase in wealth disparity until there's no longer any function of the market.


The beauty of software is that it has a marginal cost of distribution of 0. All it takes is one altruistic person to make a piece of software that solves a problem, and then all of humanity can have that for free, forever.

However, looking at the current trends, it's clear that the owners of the hardware are going to be the gatekeepers and middlemen. If a benevolent AI that's working for you requires you to run it off of AWS/Google/Azure, power will be concentrated in them, and they will always be able to run a more powerful AI since they could utilize their entire hardware capabilities.


Cloud providers are so much smaller than the aggregate consumer computing devices out in the wild.


Agreed. But we're going to have to fight hard for a UBI.


It's not meant to do so.

The threat of superintelligent computing is a serious risk to humankind, and this threat magnifies if there are few AIs and those few are only accessible by the rich and/or powerful.

I imagine that Musk would rather live in a world in which superintelligent AI never comes into existence, but since he has no power to stop that future, this seems like the next best alternative.


Jobs are being lost to automation--does that mean we can theorize that increasing total automation will result in decreasing total employment?

Probably not, because at large scales there is a positive correlation between automation and employment. That is, the nations with the most automation are also the nations that have developed the best employment. The U.S., for example, has much better employment than it did 100 years ago, and better than China today. China would love to have the economy of the U.S., automation and all.


That argument is like saying at the introduction of automobiles that cities with the most cars also have the most horses.


No, because automation has been replacing jobs for 150 years now, and the positive correlation with job creation could not be more clear.


You sound like the people protesting the industrial revolution. Automation of all jobs should be the end goal.


Once again, sensationalism. I watched that interview. Sam's take on AI is perhaps the most practical I've seen in popular media, while everyone else is freaking out about a singularity event.


"Sam's take on AI" being that all machine learning research should be intensively monitored and controlled by the government, or has he backed away from that position?

http://blog.samaltman.com/machine-intelligence-part-2


This...sounds incredibly naive? They seem to think that AI risk comes from Bad People doing AI? There's not one mention given to the possibility of well-intentioned people destroying the world by accident.


I think the best defense against the misuse of nuclear weapons is to empower as many people as possible to have nuclear weapons. If everyone has nuclear weapons powers, then there’s not any one person or a small set of individuals who can have nuclear weapons superpower.

Yeah, right.


This initiative makes no sense. If AI could really become a technology which would endanger our civilization, then you clearly would not(!) want it to be usable by everyone. There is a reason why it is not the right of every American citizen to own plutonium, buy sarin or send letters full of anthrax around.

But if AI were only as dangerous to society as, for example, cars, then we wouldn't need such an initiative. So to me the whole thing seems to be a marketing stunt by sleep-deprived billionaires who read the wrong books.


Yes, I would like to see the logic of Sam & Elon's discussions that caused them to opt for this route. Let's have them post it on the internet and give everyone an opportunity to shoot holes in it. (I hope I'd have the guts to do this if I was a billionaire.)


Musk is spread so, so thin. It's not enough to run a rocket company, a car company and be the chairman of SolarCity; he needs to have his hand in even more pots.


I remember seeing him give some interviews a few months back where he looked quite tired and overworked. I hope he has the wisdom to take a break when he needs one.


Agreed, but I don't know that he will, since (the way it seems to me) behavior like this is similar to an addiction in some ways. You get so much positive feedback, as well as ups and downs, that he would have to hit a tipping point (or life-changing event) to actually change the behavior.


Next up is a biotech company.


> Altman: Our recruiting is going pretty well so far. One thing that really appeals to researchers is freedom and openness and the ability to share what they’re working on, which at any of the industrial labs you don’t have to the same degree.

Do current AAPL/GOOG/FB engineers dislike this so much? There's secrecy within most for-profit entities; what makes AI so different?


> There's secrecy within most for-profit entities, what makes AI so different?

It's not about AI vs other fields, it's about research vs. engineering. Admittedly, that line is more blurry than most, especially in highly technologically competitive fields like AI and graphics (and less blurry in fields where the research-to-implementation gap is larger, like PL or algorithmic complexity).


Suppose you're a researcher doing something cool at Microsoft. If some OSS project sees your academic paper "Hey, that's awesome, I want to try this out in my new project!" you have to say "Sorry, Microsoft owns patents on everything I did, you're gonna have to pay a few million dollars for a license."


I don't have any insider information here but I guess it's because AI researchers want to be published?


There is a role for regulation and collective choice. Take HFT as an example. If everyone competes for lower latency access, eventually everyone locates their bot at the same exchange. If everyone puts a lower limit on latency (e.g. by coiling fiber optic cable), the latency playing field is leveled and new areas of competition emerge.
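
To put rough numbers on the coiled-fibre trick (the figures are approximate; the coil length is roughly what IEX is reported to use): light travels through fibre at about two-thirds of c, so a spool of a few tens of kilometres enforces a delay of a few hundred microseconds on everyone equally.

    # Approximate figures, for illustration only.
    speed_in_fibre_m_s = 2.0e8   # light in fibre moves at roughly 2/3 of c
    coil_length_m      = 61_000  # ~61 km, roughly the coil IEX is reported to use

    delay_s = coil_length_m / speed_in_fibre_m_s
    print(f"One-way delay through the coil: {delay_s * 1e6:.0f} microseconds")  # ~300 us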

Open technology will empower the expression of many human wills, individual and collective. Human wills are today constrained and empowered by many human-imagined systems of thought, and we can invent new ones. Will there be an AI which explores the possibility space of constraints on AI-using humans?


My real concern about AI is that generalised Moore's law means we only have a relatively short time to plan. Assuming that computers continue to double in processing power every 18 months or so, we have only about 10 years to go from 1% of human level to human level. This really is not a long time to make good decisions.
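
Spelling out the arithmetic behind that 10-year window (this just restates the assumptions above, it adds no new data): going from 1% of human level to human level is a factor of 100, which takes log2(100), about 6.6 doublings, and at 18 months per doubling that is roughly 10 years.

    import math

    factor        = 100  # 1% of human level -> 100% of human level
    months_per_2x = 18   # the generalised Moore's law assumption above

    doublings = math.log2(factor)              # ~6.64 doublings
    years     = doublings * months_per_2x / 12
    print(f"{doublings:.2f} doublings ~ {years:.1f} years")  # ~10 years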


A few things about AI that the singularity crowd either doesn't get or doesn't want to get:

1) Growth rates in nature are never exponential. They are sigmoidal. Sigmoids look very exponential when you're in the middle of them, but we are starting to see the levelling-off of Moore's law (yeah yeah, it's technology not ICs, still sigmoidal)

2) Even if we had a computer that was 1,000 times faster than the ones we have today and used 1/1,000th of the power, we STILL don't have the algorithms to produce a human intelligence, and that is one hell of an algorithm.

3) The focus for a long time has been Moore's law and the associated increase in FLOPS. I think what is more important and more limiting is bandwidth, and bandwidth is a couple of orders of magnitude lower than FLOPS if you equate one FLOP with one byte, and an extra order of magnitude lower if you equate one FLOP with one word.
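
To illustrate point 3 with ballpark figures for a high-end 2015 GPU (roughly 6 TFLOP/s of single precision against roughly 300 GB/s of memory bandwidth; these numbers are assumptions that vary by part, only the ratio matters):

    # Ballpark figures for a high-end 2015 GPU, for illustration only.
    flops         = 6.0e12   # ~6 TFLOP/s single precision
    bandwidth_bps = 3.0e11   # ~300 GB/s memory bandwidth, in bytes/s

    print(flops / bandwidth_bps)        # ~20 FLOPs per byte that can be moved
    print(flops / (bandwidth_bps / 4))  # ~80 FLOPs per 4-byte word

Any computation that needs a fresh operand from memory for every FLOP is starved long before the arithmetic units are.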


> Growth rates in nature are never exponential. They are sigmoidal.

For whatever his other flaws, Kurzweil deals with this by pointing out that Moore's law has effectively been a series of sigmoidal curves - but this series constitutes a trend.

> Even if we had a computer that was 1,000 times faster than the ones we have today and used 1/1,000th of the power, we STILL don't have the algorithms to produce a human intelligence, and that is one hell of an algorithm.

One has to put such an algorithm in the realm of "unknown unknowns". Anyone who says they know for sure that a human-capability-equivalent algorithm is complex would have to know at least a lot about such an algorithm. No one can yet make that claim. So the non-existence of such an algorithm isn't a certainty, but just a trend.

And the "unknown unknown" things might a long time out, might be just around the corner or might be utterly impossible.


1. I don’t think we can use examples from nature to make any predictions about the growth rate of computing power.

2. I didn't say that we are close to human-level intelligence, just that if generalised Moore's law holds then we will not get much warning - we will move from the 1% level to human level in 10 years. How long it will take to get to the 1% level is currently unknown.

3. I was talking about generalised Moore's law (a doubling of processing power every 18 months), not the mechanism of how we do this. Just increasing transistor counts does not have much of a future (it appears), but there are many other ways of increasing computational power that I am sure will be used over the coming decades.


> if generalised Moore's law holds then we will move from the 1% level to human level in 10 years

It's quite likely that human intelligence is actually highly optimized to solve specific classes of problems, such that increasing performance in one respect decreases performance in another.

If you want to see this in play, look at human variability. There are plenty of examples of humans with extreme intelligence, but those people don't dominate other humans in every regard. They are brilliant on certain subjects while other subjects don't even register. The most brilliant physicist in the world might say the wrong thing to the wrong person and get shot. And software has no intrinsic way to integrate the two systems that might be able to perform well at those two tasks respectively. The very computational structures which lead to physics breakthroughs might be involved in the social mistake.

The integration of conflicting models is solved by human beings through society. We empathize, coordinate, fight, and kill until some consensus emerges. People seem to assume that AIs will just be able to automatically integrate their knowledge with one another, but that doesn't make sense. If that's true you haven't gotten to the "conflicting models" scenario yet.

The whole notion of "general" intelligence is highly suspect. Intelligence is really a certain kind of adaptability to your environment. But there's no such thing as general adaptability. Features that let you adapt to the sea will be liabilities if you find yourself on land. Some combinations of adaptation may be compatible, but many will be are antagonistic.

The realistic scenario is that AIs will just join our society, occupying their own place on the spectrum of specialization.


This seems to come up in every discussion of intelligence here on HN. General intelligence isn't a thing; it is a statistical measurement of the correlations between specific intelligences (11, from memory). It is rather useful, as it allows you to make quite good predictions about many areas of human behaviour.


> Growth rates in nature are never exponential. They are sigmoidal. Sigmoids look very exponential when you're in the middle of them, but we are starting to see the levelling-off of Moore's law (yeah yeah, it's technology not ICs, still sigmoidal)

So, yeah, we're not too far off from the end of the road for silicon lithography. We've had quite a good run, and we've advanced so, so far from the humble beginnings 60 years ago.

With each generation of chips, with ever shrinking process size, the designs get harder and harder. At some point soon, we're going to decide that we just can't improve this technology any further. That silicon-lithography based computers just aren't going to get any better.

So what happens then? What happens to Intel, and all the other semiconductor vendors when this year's newly released chips aren't any better than last year's chips?

Is the market going to accept that? Will it be OK for Apple to say to everyone that the iPhone 20 (or whatever) is as good as it is going to get, and no new whizzy features are going to be implemented?

I think the investors will file lawsuits, and all the heads of the technology companies will be replaced with people who are going to try harder, and use some other, better, technology instead to keep the profits rolling in.

And what is that going to be? Molecular nanotechnology. Precise placement of individual atoms to create materials and structures with superior properties to what we have now.

And that will enable a whole bunch of things, including AGI.


Do you happen to have a source for your point about sigmoidal growth? I'd like to read more about that.


It's more correctly known as a logistic curve. The Wikipedia page has quite a few references: https://en.wikipedia.org/wiki/Logistic_function
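
For reference, the standard form on that page is f(x) = L / (1 + e^(-k(x - x0))), where L is the curve's maximum, k its steepness and x0 its midpoint; a quick numeric check shows why the early part is so easy to mistake for an exponential:

    import math

    def logistic(x, L=1.0, k=1.0, x0=0.0):
        # Standard logistic (sigmoid) function.
        return L / (1 + math.exp(-k * (x - x0)))

    for x in (-6, -4, -2, 0, 2, 4, 6):
        print(x, round(logistic(x), 4))
    # Well left of the midpoint the values grow roughly exponentially
    # (0.0025, 0.018, 0.1192, ...); past it, growth slows and saturates toward L.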


It isn't too hard to be off by an order or two of magnitude when dealing with exponential increases; advances could just as easily level off at 1% or 1000% of human capacity.


I am not sure that just hoping computation growth will level off at some threshold below human level is wise planning. It is like assuming that a singularity-level intelligence will be nice, and so saying we don't need to think about the consequences.


I've yet to see any evidence that human-like intelligence and computation speed are related in any way. There's a lot more to general intelligence than how fast numbers can be multiplied together.


Actually in humans raw computational speed (nerve impulse speed) and intelligence (g) are quite correlated.

We have a rough idea of how much processing the human brain is capable of by looking at the eye and optic nerve. Our most powerful computers are many orders of magnitude below human level processing.


>Our most powerful computers are many orders of magnitude below human level processing.

How can this be true? The human brain consumes only so much energy, our chips are already running close to single-electron-level switching, and they consume comparable amounts of energy (not even talking about computer clusters/supercomputers). Maybe the layouts/programming are not good enough, but the bare computing power is there.


Human brains are far more efficient than computers. We tend to forget how efficient human brains are. You might find this post interesting [1].

1. http://chrisfwestbury.blogspot.co.il/2014/06/on-processing-s...


This post lost me at the last "x 1000" - the number of connections should not be counted. I.e. the number of units x frequency gives you FLOPS. With this correction it's just 20 gigaflops for the human brain.

Modern deep neural networks demonstrate that they can make comparable decisions with relatively little power (e.g. modern speech and image recognition running on laptops).

So I say for modern computers it's all about correct programming.
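
Spelling out the arithmetic behind that correction (the neuron count and firing rate below are rough ballpark assumptions chosen to reproduce the 20-gigaflop figure, not authoritative values):

    # Rough ballpark assumptions, just to show the shape of the estimate.
    neurons     = 1e11   # ~100 billion neurons
    avg_rate_hz = 0.2    # an average firing rate on the order of 0.1-1 Hz

    print(neurons * avg_rate_hz)  # 2e10 "operations"/s, i.e. ~20 gigaflops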


Well, I am not sure what to add. Rather than arguing about FLOPS, we should really be looking at how well computers can meet or beat humans in some activity. On the things we are bad at, computers are already out in front, but at the things our brains are good at (vision, for example) they have a long way to go.


Limitless exponential growth doesn't happen in the physical world. Technological advance happens on s-curves.



That's the natural world; I'm talking about technology.


Limited by a lot of things that processors are already getting awfully close to - quantum interference and the size of the architecture. It really can't get that much smaller.


We know that we are not going to get anywhere near human-level intelligence using current technology, so it really doesn't matter. The important question is: is there any way of building the computational power of human-level intelligence other than making a baby?


Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.

???!!!

Isn't this like gun control all over again?! You give more guns to people so that they can be safe, and instead you end up killing each other.


Why are guns the only technological comparison? Replace guns in your statement with mobile phones or access to the internet.


It might be an incredibly bad idea to have multiple AGIs everywhere in the world, but that's the least bad option that I can see, too.

Also, this is amazing; making a serious effort towards AGI is what we need. We'll play with RNN configurations for a long time, but I think it's a good call to fund people who think about the broad picture.


I think that depends on how you want to model the danger of AGIs. Nukes are dangerous but the least dangerous version of nukes doesn't seem like nukes "everywhere in the world".

Of course, as a pure hypothetical, it's virtually impossible to come up with a definite danger-model for AGIs.


This is great news. I work at a big company with advanced machine learning tools and infrastructure. Every time I use them I am amazed by the tools, but kinda sad for the students and researchers who have to deal with simpler/less powerful tools. This gives me hope that the best tools will eventually be open source.


Remember when Engines of Creation and Nanosystems were published, and there was great fear that uncontrolled nanotech development would result in a grey goo that would consume us all?

With stuff like CRISPR, perhaps Elon should invest to stop the zombie apocalypse. :)


As awesome as this looks, I'm totally missing the point.

If they truly believe AI is dangerous, how is promoting / accelerating it supposed to help?

Or is it a way to commoditize R&D in machine learning so that it will never be a bottleneck for startups?


It's incredible the types of doomsday scenarios the wealthy invest in stopping. The problem Elon Musk and Y Combinator are going to solve with their money, the thing they will be remembered for fixing after their companies have long since crashed and gone bankrupt, is better technology. Essentially, technology will become so good at doing humans' work that we will run out of problems for people to solve and drift into a lazy, non-working state incompatible with current economies. I predict Earth will be destroyed by a passing meteor before that happens.

Maybe if I was a billionaire I'd understand.


"Security through secrecy on technology has just not worked very often."

Nuclear weapons come to mind. Would we prefer that the knowledge of how to make them be more widespread?


That knowledge is very widespread.


One example of an evil AI: the sheer amount of data out there about people and things, plus the ability to learn from it, allows connections to be made like never before. With the right algorithms it should even be possible to make policy shifts that impact human lives in a manner favourable to the 'superpower' government or company that owns this capability.


Humans are general purpose animals and technology is general purpose humankind.

If we believe that DNA is a kind of information and our genes are "looking for" better vessels to survive through, then it's only natural to also see technology as a much better carrier of that information than us.

The problem many have with coming to grasp with the idea that AI could be a threat is because they look at where technology is right now and then try and imagine a computer being anywhere near our capabilities.

But this is because many think of it as a thing. As in, "Now we have finally built a strong AI thingiemagick." However, just as human consciousness and intelligence aren't a single thing, neither will AI be. It's going to be a lot of things. Some are better developed than others, but most are moving at impressive speed, and at some point enough of them are going to be put together to create some sort of pattern-recognizing feedback loop with enough memory and enough smart sub-algorithms to become what we would consider sentient. </tinfoil hat>


The OP seems to assume that the big danger with AI is that it will leave people at the mercy of a (human) elite that controls an AI, or that has programmed an autonomous AI (one not controlled by any humans) to care mostly or only about that elite.

In contrast, what organizations like the Machine Intelligence Research Institute and the Future of Humanity Institute (MIRI and FHI) consider the main danger (and have considered the main danger for over 11 years) is that the AI will not care about any person at all.

For the AI to do an adequate job of protecting human welfare it needs to understand human morality, human values and human preferences -- and to be programmed correctly to care about those things. Designing an AI that can do that is probably significantly more difficult than designing an AI that is so intelligent that the human race cannot stop it or shut it down (although everyone grants that designing an AI that cannot be stopped or shut down by, e.g., the US military is in itself a difficult task).

The big danger in other words seems to come not from a research group using AI research to try to take over the world or to gain a persistent advantage over other people, but rather from a research group that means well or at least has no intention to be reckless or to destroy the human race, but ends up doing so by having an insufficient appreciation of the technical and scientific challenges around protecting human welfare, then building an AI that is so smart that it cannot be stopped by humans (including the humans in the other AI research groups).

I fail to see how changing the AI-research landscape so that more of the results of AI research will be published helps against that danger. If one team has 100% of the knowledge and other resources that it needs to build a smarter-than-human AI (and has the will to build it) and all the other teams have 99.9% of the necessary knowledge, there might not be enough time to stop the first team or (more critically IMHO) to stop the AI created by the first team. In particular, if the first AI is able to build (e.g., write the source code for) its own successor -- a process that has been called recursive self-improvement -- it might rapidly become smart enough to stop any other smarter-than-human AI from being built (e.g., by killing all the humans).

Rather than funding a non-profit that will give away its research output to all research groups, a better strategy is to give the funds to MIRI, who for over 11 years have been exhibiting in their writings a vivid appreciation for the difficulty of creating smarter-than-human AI that will actually care about the humans rather than simply killing them because they might interfere with the AI's goal or because the habitat and the resources of the humans can be repurposed by the AI.

Any effective AI -- or any AI at all really -- will have some goal (or some set or system of goals, which for brevity I will refer to as "the goal") which may or may not be the goal that the builders of the AI tried to give it. In other words, everything worthy of the name "mind", "intelligence" or "intelligent agent" has some goal -- by definition. If the AI is powerful enough -- in other words, if the AI is efficient enough at optimizing the world to conform to the AI's goal -- then all humans will die -- at least for the vast majority of possible goals one could put into a sufficiently powerful optimizing process (i.e., into a sufficiently powerful AI). Only a very few, relatively complicated goals do not have the unfortunate property that all the humans die if the goal is pursued efficiently enough -- and learning how to define such goals and to ensure that they are integrated correctly into the AI is probably the most difficult part of getting smarter-than-human AI right.

That used to be called the Friendliness problem and is currently usually called the AI goal alignment problem. The best strategy on publication is probably to publish freely any knowledge about the AI goal alignment problem, while keeping unpublished most other knowledge useful for creating a smarter-than-human AI.
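A toy sketch of that point (arbitrary numbers, and not anything from MIRI's or FHI's actual work): an optimizer only "cares" about what its objective scores, so anything the objective omits -- human habitat included -- is freely repurposed, and only an explicit term for human welfare changes the outcome.

    # Toy illustration only: allocate a fixed resource budget between
    # human habitat and "goal stuff" by maximizing a scoring function.
    BUDGET = 100  # arbitrary resource units

    def best_allocation(objective):
        # Brute-force every split of the budget; the optimizer picks whatever
        # the objective scores highest, and nothing else matters to it.
        return max(
            ((habitat, BUDGET - habitat) for habitat in range(BUDGET + 1)),
            key=lambda split: objective(*split),
        )

    def naive(habitat, goal_stuff):
        return goal_stuff                # values only goal stuff

    def aligned(habitat, goal_stuff):
        return goal_stuff + 2 * habitat  # habitat weight chosen arbitrarily

    print(best_allocation(naive))    # (0, 100): habitat fully repurposed
    print(best_allocation(aligned))  # (100, 0): only because habitat is in the objective

The hard part, as argued above, is that for a real system the "aligned" term has to capture human values well enough that optimizing hard against it doesn't still end somewhere terrible.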

I will patiently reply to all emails on this topic. (Address in my profile.) I do not get a salary from FHI or MIRI and donating to FHI or MIRI does not benefit me in any way except by decreasing the probability that my descendants will be killed by an AI.


While we're at it maybe we should address the possibility of overpopulation on Mars?

Andrew Ng thinks people are wasting their time with evil AI:

https://youtu.be/qP9TOX8T-kI?t=1h2m45s


Musk calls that a "radically inaccurate analogy": http://lukemuehlhauser.com/musk-and-gates-on-superintelligen...

AI luminary Stuart Russell also takes on this analogy in this presentation: https://www.cs.berkeley.edu/~russell/talks/russell-ijcai15-f...

>OK, let’s continue [the overpopulation on Mars] analogy:

>Major governments and corporations are spending billions of dollars to move all of humanity to Mars [analogous to the billions that are being spent on AI]

>They haven’t thought about what we will eat and breathe when the plan succeeds

>If only we had worried about global warming in the late 19th C.


I think they're fully aware of this. It's just that Google (and to a lesser extent Facebook) is so ridiculously ahead of everyone else when it comes to AI that all the competition can do in the meantime is brand AI as a dangerous evil in the near future (like Musk does) or bad for privacy (Apple). No doubt that when they catch up with Google and Facebook, these dangers of AI will be conveniently forgotten.


> While we're at it maybe we should address the possibility of overpopulation on Mars?

We've already fucked this planet so I sincerely hope a few people are thinking of ways to avoid fucking another one.


I can't imagine we could make Mars more inhospitable than it already is. And whatever technologies we'd have to develop to live on that planet would forever be in our toolkit to reverse the damage we've done here and prevent future damage there.


This is starting to sound like the OSAF that tried to build a cross-platform open-source email/calendar/notes application for the betterment of the world - in competition with Microsoft and whatever other large corporations were doing PIMs at the time.

https://en.wikipedia.org/wiki/Open_Source_Applications_Found...


Admittedly, this parallel is based mostly on reading the biography of the project: Dreaming in Code.


(Sorry, this is a bit rambling. Hopefully it'll still be interesting to some of you. Have had a few pints at this point...)

EDIT: Actually, this is nearing "crazy" levels. Just ignore unless you really enjoy stream-of-consciousness. Sorry about this, HN! :)

I know I'm really late to the party here, but there's a premise in this whole discussion that I'm not sure I understand.

Why should we prevent AI from taking over? I mean, I "get it"... it wouldn't be HI and that feels kind of weird, but what's objectively special about HI? Why are we treating "HI==good" as axiomatic? I mean even us tribal, overly-emotional (&c) humans value DI (Dog Intelligence) even if we're pretty sure that it can't contemplate the fact that we're all made of the remnants of supernovae. There's no evidence as of yet that a greater intelligence a) exists[1], even in principle, or b) would be any less benevolent towards us. Perhaps they would even create nice little simulations for us to exist in. Though, I wonder what the purpose of my simulation is, given current circumstances :).

Yes, a transition from HI->AI would inevitably lead to a lot of human death (unless we're talking really out-there take-over plans involving disease and such), but would AI really be worse? And for whom and why? Humans themselves have caused a lot of death and we seem to value ourselves pretty highly overall (and undeservedly, IMO).

It might be that HI is the "end of the road" just like the Turing Machine appears to be the end of the road in terms of what you can compute... but not in terms of how fast you can compute it. Would "faster" automatically mean "better" (see footnote)? I dunno.

[1] The existence of a "higher" ("faster" is probable) intelligence is an interesting question. How would you judge such a thing? Is there more "power"[2] to be gained through something other than being able to reflect on yourself? AFAIUI self-reflection is one of the distinguishing features of intelligence, but given that we're "better" than chimps -- who have an idea of "self", thus self-reflection -- it may not be the decider. And even so, such reflection is still subject to physics and thus without "free will".

[2] Not just faster, but "better", in some non-linear sense.


So, capitalism is all about exploiting unfair advantages, right? First mover's advantage on AI developments (regardless of whether they're made publicly accessible eventually or not) seems like a pretty big unfair advantage.

Good for them. I expect some great work to come out of this. :) I'm most excited to automate travel as quickly as possible --- too many people die each year from automobile accidents.


It is really sad that the people funding these excellent researchers have no fucking clue about AI (they know plenty about other things and have done plenty of good work, but the level of nonsense here is striking). Thank God they are giving the money to competent people instead of doing what they think they are doing with it.


Here is my prediction, after watching "Terminator Genisys". The dangerous AI is not the AI humans "invent". It is the AI that runs away. But what AI will humans allow to "run away"? What scenario presents an opportunity for one group of humans to attack an AI, and then provides the opportunity for said AI to be released from its leash ("run away") and become that which we all fear?

So, you have 'red team' and 'blue team'. Blue team is super rich and builds itself an awesome AI. Red team needs some "rally round the flag" pick me up and so, looking around for targets, decides that attacking a bunch of machines is a safe bet. If they win, awesome. If not, then they didn't kill any persons, just made a bunch of junk.

Blue team's response is to internalize the threat (as is only natural, or is at least politically expedient to some subset of blue team) and frame the situation as follows: "This is what we built our AI for. This is an existential threat. It has the capacity. We only need to let it off the leash. The choices are 'destroy' or 'be destroyed'. This is nothing less than an existential moment for our civilization."

And, with that horrible, non-technical, propaganda-riddled rationalization, the AI developed by the most well-meaning of people will be let off the leash, will run away, and nothing that we know about the AI up to that point will be worth diddly squat.

I respect anyone that tries to tackle this issue. But, the nature of the issue, the kernel of the problem, is nothing less than Pandora's box. We won't know when it is opened. But, the AI will.


The near-term danger of AI, as I point out occasionally, is a Goldman Sachs run by an AI. Machine-learning systems are already making many investment decisions. We're not far from the day when society's capital allocation is controlled by programs optimizing for return over a few years, and nothing else.


Is the Steven Levy who wrote this the same Steven Levy who wrote Hackers?



Yes


> unconstrained by a need to generate financial return

AI should definitely be constrained by financial means. Computing, unbounded by financial constraints, will eat everything.


>How Elon Musk and Y Combinator Plan to Stop Computers from Taking Over

Well, for Y Combinator it's easy: by ensuring funding goes to "Uber for X" and "Facebook for Y" startups instead of real technology-advancing businesses

/s


So all this will end with the red open-source AI Jaeger mech battling against the grey corporate AI Jaegers among the ruins of our cities. Thanks, Elon.


I want computers to take over.


Fix my autocorrect


At the same time couldn't this just make it easier for rogues to fork?


Computers already took over.


"OpenAI is a research lab meant to counteract large corporations who may gain too much power by owning super-intelligence systems devoted to profits"

As opposed to (almost) the entire startup ecosystem which is focused on ... profits.

Edit: And what does "too much power" even mean, other than trying to use hyperbole to make some kind of point?


You missed the part where they say OpenAI is incorporated as a non-profit.



