Open Philanthropy Project awards a grant of $30M to OpenAI (openphilanthropy.org)
236 points by MayDaniel on March 31, 2017 | 173 comments



There's lots of concern about the bizarre relationship disclosure. But perhaps even more bizarre is that this deal has a structure closer to a strategic move than actual philanthropy. Am I massively misreading this?

This page details how their main goal with the $30M isn't to increase OpenAI's pledged funds by 3%, thereby reducing the marginal "AI Risk" by less than 3%. The goal is to have a seat on the board (basically -- they use a lot more words to say this in the announcement). What on earth is going on where a charitable organization with Open in its name feels it needs to buy its way onto the board of a prominent non-profit in order to:

"Improve our understanding of the field of AI research"

"[get] opportunities to become closely involved with any of the small number of existing organizations in “industry”"

and "Better position us to generally promote the ideas and goals that we prioritize"

Isn't the whole point of "open philanthropy" that you can direct funds to organizations more open about what's going on?!


Scroll to the end. This is a $30M grant to the guy's roommate and future brother-in-law.

Unbelievable.


I scrolled to the end and can see why someone would question that. But framing it as a personal grant is misleading. If we're going to keep telling users that on HN we want them to make substantive points thoughtfully and apply the principle of charity, it's only fair to say so here as well. We like pith, but fairness more.

(In case anyone is curious, I don't know anything about the story; I just saw it on HN.)


I don't think the principle of charity applies when someone is making foundation grants to benefit their close relatives and roommates. The principle of shenanigans kicks in.


Do you/YC have any personal or financial connection to the "Open Philanthropy Project"? Just curious.


I don't, and only hear of these things when they show up on HN. No idea about YC but I doubt it.


On the other side, OpenAI was previously also funded by YC: http://blog.ycombinator.com/openai/


Thanks.


I don't know if this counts, but YC funds the Centre for Effective Altruism and 80,000 Hours. CEA, 80k, and OPP are all part of the "effective altruism" ecosystem, and it is not hard to find personal connections between them. (For example, CEA recently launched "EA Funds", which are managed by employees of OPP.)

The connection is not direct. There may be more direct connections between YC and OPP, but I am not aware of any.


Here's the other perspective: if I'm working at Open Philanthropy, I'd want to get to know the people who run organizations I'm likely to be really interested in.

So it's natural to expect there's a high chance I'd want to give to the organizations they work at later on.

Just because someone has a personal connection doesn't mean it's a bad use of money. It's worth asking the question, but a personal connection alone doesn't make it "unbelievable."

Also, it's not just his roommate and future brother-in-law: it's also a project funded by Elon Musk, Reid Hoffman, etc. It seems pretty damn reasonable to think they would've evaluated OpenAI for a grant regardless of the personal relationships.

Hater News at its best.


It is pretty unbelievable, since it's not true. The actual quote:

>OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela.

His roommates/sister's fiancé work there. That's all.


Holden is engaged to the sister of the guy he is funding. Hence, "future brother-in-law".


"Running an organization" is not the same thing as "owning an organization". Maybe at a company the CEO might have shares, but there's no equivalent at a non-profit. It's just a job that pays below market rates. If money was the goal, there's a plethora of readily-available much-higher-value opportunities that won't get sneering Internet comments.


That's not a response to my comment.


You said "sister's fiancé", which is backwards.


Fair enough, but it doesn't make much difference.


idlewords, this comment has tremendous negative utility for humanity. By pointing out obvious nepotism and corruption in the AI risk charity field, you make people less likely to donate in the future, thus increasing the likelihood of a takeover from the robots. You must delete this comment; the fate of the world depends on it!

/s


I'll believe it when I hear it from a future superintelligence that has pre-committed to simulating 3^^^^^3 copies of me.


By the time a simulation of you hears that it will already be too late.


To the company that his roommate works for as a researcher. The way you're phrasing it is pretty misleading.


Two of his roommates.


I don't see this as a problem. If I had tons of money and knew someone (by any connection) who was doing amazing work I would definitely chip in.


It's not his money, though. Also, when you invest in your friend's business, you don't get to deduct that on your taxes.


It's not a business; it's a non-profit founded by Elon Musk, Sam Altman, and others, with $1 billion in funding commitments.

How is this controversial? Just because some people know each other?


Not American, but it seems like you have some information that you should contribute to your country's tax collection agency. I'm sure they would be grateful.


It's not Holden's money - he works for OPP, he doesn't fund it. The money comes from Dustin Moskovitz and Cari Tuna, and it's already in a trust.


Yes, that's what I wrote.


I want to make sure I fully understand the accusation here.

You're saying that the Open Philanthropy Fund - which is funded by an $8.3 billion grant from Dustin Moskovitz and Cari Tuna, also close associates - is funneling $30M to an organization that pays below market rates (https://www.quora.com/What-is-compensation-like-at-the-non-p...), run by people who have dedicated their professional careers and millions of dollars to philanthropic causes despite being surrounded by way more lucrative opportunities for anyone with their skillsets.

If this were the scheme, there are countless better ways to do it.

They could just give them the money without any pretense. Dustin and Cari didn't have to tie up this money in OPP. They could have skipped the years of working with Holden and others to identify the best giving opportunities, avoided any blowback, and just used their money the way every other billionaire does. Or, instead of just giving it away, they could have made him an absurdly compensated CEO of a new startup.

And none of that would have attracted any attention, no sneering condemnation, just business as usual.

But that's not what they did.

They've spent years painstakingly identifying the best causes they could find - anti-malarial nets, poverty relief via direct cash transfers, biosecurity, intestinal worm treatment, schistosomiasis, prison reform, and yes, AI safety. They've oriented their entire lives around this project, so of course many of the people they're close to are working on similar projects. So it really, really shouldn't be a shocking twist that one of the people they're close to might be in a position to use a small fraction of their available funds for a lot of potential good. There are fewer than 100 people working full-time on AI safety today. If you've concluded that it's an important cause area, there really aren't many options.

And even then, they didn't have to disclose their personal connection. They really could have just left well enough alone. But because they're dedicated to transparency even in the face of stupidity, they made their personal connection prominent and obvious. So now anyone on the Internet can cruise on by and - ignoring the millions donated to third-world poverty and health causes, ignoring the multitude of ways the money could have been quietly and selfishly used, ignoring the fact that non-profits invariably pay below-market rates, ignoring the copious public writing and research that's gone into these decisions - can simply gawk and say "unbelievable".

When people say "No good deed goes unpunished", this is what they're talking about.


You make a lot of points. For the moment I'll take issue with only one of them. You claim OpenAI is "an organization that pays below market rates", and give a link to support your claim. The first thing in that link is "OpenAI compensation is similar to industry compensation.", which means it's basically "market rate". Later that link says "OpenAI does not pay absolute top of market", but arguing from that sentence that OpenAI doesn't pay market rate is like saying for example Apple doesn't pay market rates because there's a hedge fund which pays more.

So I believe the source you cited indicates the opposite of what you claim it does.


I don't know - it certainly seems to me that this seriously tarnishes the credibility of GiveWell, whose stated aim is to improve everyone's (not just Dustin's) charitable resource allocation.

The likelihood that this $30M is the best possible use of that money, and that the personal connection is mere coincidence? Pretty much zero. Of course all opportunities in life come down to your network, but this is pretty cut-and-dried nepotism.

If this were a totally unrelated personal investment by Dustin in a friend, it would not be seen as problematic. By investing through these supposedly impartial organisations that aim to influence everyone's behaviour, their credibility in this mission is clearly harmed.

(At least this is my initial response, while allowing that it may change if a more detailed analysis shows it to be misplaced. But without this expression of mistrust, such an analysis is highly unlikely to take place, and I do not immediately see how it could fully alleviate the concern.)


This personal connection did not occur by chance, but the causality you assign is reversed. Holden did not support OpenAI because his housemates work there. Rather, it is because of their similar worldviews that they live together in the first place. It is unsurprising that people who think safe AGI is a critically important investment end up in the same social circle.


Holden isn't an AGI researcher though, he's a person who's made his name arguing that some charities are much more efficient uses of money than others. Indeed when asked to review the Singularity Institute, as well as criticising the organisation itself he gave long and detailed arguments why he didn't think unfriendly AGI was a threat, was sceptical about trying to combat it through AI research and dismissed the general form of arguments about the crucial importance of donating to it as "Pascal's mugging". At best you could say he was more open-minded towards the possibility his mind might be changed on the issue than the average person.

It would be difficult to imagine that two people with very close relationships to him working for OpenAI haven't influenced his apparent change of heart; whether or not they've converted him to the cause by sheer force of intellectual argument, it doesn't look great.


So holding a similar worldview is enough for me to get a $30M investment to pursue research? No: holding a similar worldview, being friends, and having familial ties. That is the definition of nepotism.

Otherwise a large percentage of Hacker News should now be expecting similar investments to pursue research projects.

Look, I do not really care if somebody rich invests in somebody they know. But I do now doubt GiveWell's impartiality of analysis in other instances, and general good judgement. I will not be using their judgement to inform my charitable giving.


This isn't a GiveWell rec, it's an OPP grant. OPP literally exists to give away Dustin's money.

Go to OPP's (or Good Ventures') site and notice the glaring lack of a donate button.


"The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings. The Project is not, itself, an organization."

edit: I cannot reply to the below, so I will edit my comment.

I am aware that GiveWell does not invest money. Instead it provides impartial analysis of investment impact. My contention is that this was not impartial. Dustin can handle his own money, but:

People using GiveWell to decide their own investments should now think twice in my view. Which kind of defeats the whole point.


And then there's the next paragraph:

> The Open Philanthropy Project typically recommends grants to the Open Philanthropy Project fund, a donor advised fund at the Silicon Valley Community Foundation. Support for the Open Philanthropy Project fund comes primarily from Good Ventures, though other donors have contributed as well. In some cases, the Open Philanthropy Project makes grant recommendations directly to Good Ventures.

It's basically all GV money - that is to say, Dustin and Cari's money.


Accidental nepotism is still nepotism, and should be actively avoided.


I don't know what's more frustrating, the blatant nepotism, or the apologists for this behavior. Good grief.


Related snip:

  Relationship disclosures

  OpenAI researchers Dario Amodei and Paul Christiano are 
  both technical advisors to Open Philanthropy and live in
  the same house as Holden. In addition, Holden is engaged to 
  Dario’s sister Daniela.


Copied for convenience: "OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela."


That's awesome! Open Philanthropy reminds me of https://80000hours.org/.

In their relationship disclosure:

> OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela.

This is so tangled. I don't mean it as a criticism, as I'm sure a lot of SV investments would have much longer relationship disclosure sections. So props to them for including this.


Well, I would use it as a criticism. This is a tangled web of like-minded people giving money to each other and calling it charity.

Some people conflate this with Effective Altruism, which I think sucks. Compared to the rigorous work done by GiveWell, there's no way to tell if this is effective, or even altruistic.

It's just people assuming that the world will be a better place if more people who think like them have money, an assumption held by basically everyone everywhere.


Holden's dedicated a huge chunk of his career to moving hundreds of millions of dollars to alleviating poverty and disease. He's one of the founders of the effective altruism movement. You may disagree with his decision here, but to dismiss his efforts, saying it "sucks" and isn't "effective, or even altruistic", while ignoring the extensive public writing he's done on the subject that led him to these views and strawmanning his position as nepotism, is just awful.

"I disagree with the arguments presented, for these reasons" - cool. If you think the grant isn't a good idea, make an argument for that.

"This person who has dedicated their life to doing as much good as possible is close to other people who also want to do as much good as possible, and their work has led to convergent viewpoints, therefore this isn't altruism" is cheap character assassination.


> strawmanning his position as nepotism

So, you obviously feel strongly about this, but let me explain why your comments are less persuasive for those of us outside this subculture:

The non-profit they donated to is (by any reading of their mission statement) an organization designed to create new technology that "will be the most significant technology ever created by humans", according to their own statements. It doesn't disburse cash or benefits to _anyone_, and it actually pledges to keep some of the research secret: "we expect to create formal processes for keeping technologies private when there are safety concerns" -- a situation the organization claims will happen, presumably regularly!

Creating influential technology is typically done for-profit, and research is typically funded in ways much less open to individual favoritism (review boards are a great anti-corruption tool), and the results of that research are typically available to (among others) the people that fund it. There is a lot about this situation that a reasonable person would describe as unusual.

In addition, all of these changes -- introducing more direct funding with less oversight, lack of access to results, lack of expectation of benefit to the targets of the charity -- all lend themselves to obscuring a fraud. That doesn't mean a fraud is present, but I'd be extremely aggressive about oversight.

What kind of oversight are we getting? Well, right now they list one of their major goals as the "tricky" goal of figuring out if they're making any progress at all.

I would not give this organization money. Dismissing these critiques as "character assassination" ignores the fact that I've only described aspects of the organization, not of the people involved, whom I have little information about.


Moreover, to add further context, the whole basis of Holden's effective altruism work has been around the idea that philanthropic dollars ought to be focused on charities with extremely rigorous proof behind how much they improve people's lives per dollar donated, and how much they need the money.

That context makes advising a donor to direct an "unusually large" sum to an organisation with an extremely vague goal, no tangible measure of progress towards it, little of the transparency demanded of other charities, and existing funding commitments well in excess of its spending plans look like an extremely strange decision, long before you read the disclosure statement.


> Moreover, to add further context, the whole basis of Holden's effective altruism work has been around the idea that philanthropic dollars ought to be focused on charities with extremely rigorous proof behind how much they improve people's lives per dollar donated, and how much they need the money.

This isn't quite true; SCI, a charity that treats parasitic disease in the third world, is the subject of massive uncertainty and conflicting reports of effectiveness. It might turn out that it has very little impact at all. But it's still a recommended EA charity because it looks like there's a decent chance they're doing a ton of good. GW has written extensively about this.


Technological advancement has been responsible for most of the increases in human welfare, most notably allowing us to (at least temporarily) escape the Malthusian condition. It's not implausible that technical research likely to be neglected by markets and academia could be a better use of money than even the most efficient African charity. I do drink the AI-is-very-likely-to-change-everything-we-really-mean-it-this-time Koolaid, though.


OpenAI still isn't open source, or open in any other sense of the word. It is a private organization that will own any technology developed.

Again, OpenAI isn't Open.


I don't want to pick a side in this, but OpenAI does seem to have published some open source software projects on GitHub, all that I have checked under the MIT license (https://github.com/openai?type=source). This is not a high bar to cross, as even Microsoft or Facebook do that, but I have also not found any evidence on their website of projects that they have not released as free software. The fact that they plan not to publish some technologies in the future, if there are "safety concerns", is a different matter; given their claims, that is exactly what they should be doing. Clearly, one could argue that in that potential future OpenAI will become a misnomer (disregarding the issue of what actually should be done), but at the moment it appears that all of OpenAI's work still _is_ open source.


Nor is it AI.


I'm sure what they do falls within the rather broad field of AI. It's just not AGI, which is the thing they talk about to get people to give money to their AI projects.


It looks an awful lot like nepotism when you funnel $30M to your future brother-in-law.

Smart people are great at rationalizing, including to themselves.


It seems objectively better for some people to have access to money -- in this case the GiveWell founder. He's done great work there and now he's helping OpenAI all via a Facebook founder's billions.


"It seems objectively better for some people to have access to money" is classism in one sentence.

And I won't give people a pass because of the thing they co-founded and then forked. I can appreciate Wikipedia and believe that Larry Sanger's fork of it was dumb. I can appreciate GiveWell and think that Open Philanthropy is corrupt.


And your suggestion about classism is moral relativism.

Who is corrupt? The guy giving away $8 billion or the one who founded GiveWell? And one of their focuses is criminal justice reform and trying to improve prison conditions.

If you want to look at something truly corrupt, look at the criminal justice system in the U.S.


One of my favorite lines in Mad Men was Bert Cooper telling Don Draper that philanthropy is the gateway to power.


Two organizations that exploit the implications of the word "Open" as it is used in the world of technology to market their own private companies and organizations.


The Open Philanthropy Project uses the word 'Open' to mean (http://www.openphilanthropy.org/what-open-means-us):

"Open to many possibilities ... instead of starting with a predefined set of focus areas, we’re considering a wide variety of causes where our philanthropy could help to improve others’ lives." and "Open about our work ... Very often, key discussions and decisions happen behind closed doors, and it’s difficult for outsiders to learn from and critique philanthropists’ work. We envision a world in which philanthropists increasingly document and share their research, reasoning, results and mistakes to help each other learn more quickly and serve others more effectively."

This all seems pretty useful so I don't get what your criticism is.


My criticism is that it isn't open, and openness is a more or less absolute property. Every organization is "a little" open, sharing the information that benefits it. And that seems to be what OpenAI intends to do: be open about whatever suits them.


When OpenAI was announced, they mentioned having $1B in funding. Why the additional $30M?


See section 2, "Case for the grant".


I went through it but didn't see anything addressing why OpenAI needed more money.


where $1B is good, $1.03B is better?


The $1B was just promised funding, not immediate. This $30M could be immediate. This makes me believe that perhaps the original funding is being dropped due to lack of results.


Can someone explain why both orgs contain the word "Open"? I would say it's pretty misleading.

OpenAI hasn't released any open code or anything open.

And is OpenAI even about A.I.? (As several others here mentioned, it's not AI.)


OpenAI has released several open-source packages (see https://github.com/openai and https://openai.com/systems/) and several open research publications (see https://openai.com/research/)


If some of the comments I read on other AI-related articles here on HN are correct,

1 mil / year per expert * 10 experts = 10 mil / year, i.e. 30 mil over 3 years

Maybe $30 mil isn't as much as we think it is in the AI business?
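
A quick back-of-envelope sketch of that arithmetic (the $1M/year-per-expert figure is the comment's rough guess, not a verified number):

  # Back-of-envelope check of the rough figures above.
  cost_per_expert_per_year = 1_000_000   # USD/year, the comment's assumed figure
  num_experts = 10
  years = 3

  total = cost_per_expert_per_year * num_experts * years
  print(f"${total:,}")   # $30,000,000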


... or more than 100 top French researchers for 3 years (Research Director, the top pay at the end of a career, is about 6,000 euros/month, or roughly 10,000 euros/month in total cost to the employer).

[source : https://fr.wikipedia.org/wiki/Directeur_de_recherche_au_CNRS ]


1 mil / year per expert

Is this realistic?

I don't think it's impossible for a dev to be pulling $1M/yr in total comp, but it seems more likely to happen at Google or FB than in AI.


Note that this says "per expert". I'm not sure what exactly you'd consider an "expert", but I think a reasonable definition of "expert" could result in your average expert being a million-dollar-a-year engineer.


Does such a thing as a million-a-year engineer exist?

I feel like this number would make more sense for a pool of researchers with a strong lead than a single person.


While I obviously don't know for sure, I'm quite confident that there are a nonzero number of employees at any very large tech company who are engineers (i.e. do development work, commit nontrivial code) and whose compensation is over $1,000,000 USD. I doubt it's common, and I would expect that many of those employees are not "just" engineers (i.e. at the point where you are doing work that is that valuable, it's almost a certainty that you are leading a team and designing things), but I'm confident they exist.


Assume for a minute that AGI is being developed and in no way shape or form does it function or is it formed in a manner that mainstream AI efforts focus on...

That hypothetical could very well be the reality on the horizon.

What of the safety/control research that the broad majority of these institutions and ventures are centered on, if it has fundamentally nothing to do with such a system or even its philosophy? What of deep-learning-centric methodologies that are incompatible with it?

Safety/control software and systems development isn't a research topic. It's an engineering practice best suited to well-qualified and practiced engineers who design the safety-critical systems that are present all around you.

Safety/control engineering isn't a 'lab experiment'. If one were aiming to secure, control, and ensure the safety of a system, they'd likely hire a grey-bearded team of engineers who are experts with proven careers doing so. A particular system's design can be imparted to well-qualified engineers. This happens every day.

Without a systems design or even a systems philosophy these efforts are just intellectual shots in the dark. Furthermore, has anyone even stopped to consider that these problems would get worked out naturally during the development of such a technology?

Modern day AI algorithms and solutions center on mathematical optimization.

AGI centers on far deeper and more elusive constructs. One can ignore this all-too-clear truth all they like.

So... if one's real concern is the development of AGI and the understanding therein, I think it's high time to admit that it might not come from the race horses everybody's betting on. As such, it is much more worth one's penny to start funding a diverse range of people and groups pursuing it who have sound ideas and solid approaches.

This advice can continue to be ignored, as it currently is and has been for a number of years. It can persist alongside rather narrow hiring practices....

The closed/open door will or won't swing both ways.


Reading Bloomberg news too much makes me think AI research will only be used to classify ads and make more efficient securities trading algorithms.

I would love to be proven wrong, though.


" When OpenAI launched, it characterized the nature of the risks - and the most appropriate strategies for reducing them - in a way that we disagreed with. In particular, it emphasized the importance of distributing AI broadly; our current view is that this may turn out to be a promising strategy for reducing potential risks, but that the opposite may also turn out to be true (for example, if it ends up being important for institutions to keep some major breakthroughs secure to prevent misuse and/or to prevent accidents). Since then, OpenAI has put out more recent content consistent with the latter view, and we are no longer aware of any clear disagreements. "

Really, really happy to see this being carefully considered. Good job to the Open Philanthropy folks!

EDIT: That Slate Star link is amazing: "Both sides here keep talking about who is going to “use” the superhuman intelligence a billion times more powerful than humanity, as if it were a microwave or something."



I think there are more important causes than "reducing potential risks from advanced AI". Honest to god, $30M will go a long way in saving lives TODAY. Flint, MI anyone?


There are always going to be more important causes, under any particular person's view of "more important", which depends very strongly on both subjective values and (in the case of AI alignment work) on precise probabilities of far-future outcomes. A dollar given to the Against Malaria Foundation will do a lot more good, in QALY terms, than the same dollar spent in Flint. And both dollars will do more (direct) good than a dollar given in funding to, say algebraic geometry research.

Yet somehow we think it's important to fund all these things, and articles announcing new NSF grants for math research are not typically met with this kind of whatabout-ism.


Nothing about OpenAI actually addresses any real-world problem. So I have a problem with their rhetoric as much as their research agenda.

Nothing they're writing about addresses any of the real-world problems with how AI can or might be applied in society. They're a non-profit research lab with no clear agenda and no clear connection to how they plan to interrogate the world, which seems like an important part of the equation if you care about outcomes.

So, irrespective of subjective judgements, please explain to me how any of this is supposed to help anyone?

Or, alternatively, how isn't this just free R&D for industry unshackled and unconnected to ethics or society?


I think the thesis is that most cutting-edge work is siloed in the R&D departments of big players. OpenAI hopes to ensure the power of AI will be out there for any kind of organization to benefit from, under the assumption that a more democratized AI capability is less likely to lead to an adverse outcome than a highly concentrated one.

I'm not sure I buy it, but that's what I think it is.


OK - I should not have mentioned Flint, MI. My intention is not to play down funding for fundamental research. But really, I think the NSF would agree with the non-urgent nature of funding "reducing potential risks from advanced AI". They probably get proposals for things that are much more immediate, say algebraic geometry research that produces quantum-resistant cryptography at a computational complexity similar to existing methods.


I don't think they are just using the money for preventing the apocalypse. As Andrew Ng says, worrying about this is akin to worrying about overpopulation on Mars.

OpenAI will probably use the money to hire world-class researchers and give them the tools necessary to advance AI in a meaningful way that benefits humanity.

The other big labs at Google, Facebook, and MS seem to be driven by gathering as much data as possible from users, learning from it, and presenting clickbait ads to make more money.


I agree with what I take to be the main point of your argument, which is that OpenAI will use the money for a variety of different things, many of which aren't averting the apocalypse, and that those things are good. I think it's important to note, though, that Andrew Ng's perspective is far from a consensus one. For one thing, overpopulation on Mars is something that can easily be dealt with when it happens, whereas an intelligence explosion is much more of a flipped switch. For another, though there is a lot of uncertainty in the expert forecasts, a non-negligible number of experts think there's a reasonable chance of human-level AI within the next 10-20 years, and that's absolutely something we should be worrying about right now. In a 2014 survey of the top 100 most cited living AI scientists[1], the median estimate of the year at which there was a 10% chance we'd have human-level AI was 2024. Multiply that 10% by whatever chance you think human-level AI leads to an intelligence explosion and potential apocalypse, and then multiply it by the impact of that event… I think for most reasonable inputs you get a pretty huge EV as the result.

[1]: http://sophia.de/pdf/2014_PT-AI_polls.pdf
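
For concreteness, here is a minimal sketch of the expected-value multiplication described above. Every number below except the survey's 10% figure is an illustrative placeholder, not something from the survey or the comment:

  # All inputs except the first are assumed placeholders; swap in your own estimates.
  p_hlai_by_2024 = 0.10                 # the survey's median 10% figure
  p_explosion_given_hlai = 0.30         # assumed
  p_catastrophe_given_explosion = 0.20  # assumed
  impact_of_catastrophe = 1e9           # assumed scale of harm, arbitrary units

  expected_harm = (p_hlai_by_2024
                   * p_explosion_given_hlai
                   * p_catastrophe_given_explosion
                   * impact_of_catastrophe)
  print(f"{expected_harm:,.0f}")  # 6,000,000 in the same arbitrary units

The point being made is that a very large impact term dominates the product even when the probabilities are small, which is also exactly why critics in the thread reach for Pascal's wager.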


That poll is interesting, but IMHO it shows the inherent meaninglessness of predictions 10-20 years out.

Do you feel we've done 30% of the work towards human level AI between 2014 and 2017, given a target of 2024?


We went from not even beating good amateurs at Go to beating the best pros in the world. I don't think it's a progress bar we should expect to fill at a linear pace, but that certainly seems like a strong indicator we've come a long way (and it's one of many examples I could have chosen).


It certainly isn't a linear bar, but the improvements in AI over the last three years do not look at all like getting closer to human-level intelligence.

We have just become better at some tasks that we have known how to automate for a long time, albeit badly. We don't have a clue how human intelligence is supposed to work.


Musk replied to Ng's comment: http://lukemuehlhauser.com/musk-and-gates-on-superintelligen...

Or as someone else quipped, the moment a human sets foot on Mars, Mars has become overpopulated.


The cult of 'AI risk' is a religious belief that brooks no dissent, and unfortunately has its claws in a lot of otherwise smart people.


Ways in which it resembles a religious belief: proponents argue passionately, seem impossible to be dissuaded even in the face of ridicule, and talk about some sort of transcendence beyond the normal human condition; some get together and chant litanies.

Ways it is different: obsession over cognitive heuristics and biases, AI is not supernatural and must be human caused, behavior not totally certain (it is "risk" after all), no perks that true believers receive should their hopes and wishes come true that non-believers or apostates don't also receive nor revenge fantasies of such, no requirement for tithing, no Big Alpha divine or otherwise to represent or interpret or decree official approved beliefs (though I'll admit Big Yud can seem close), no anthropomorphizing the AI, and proponents concede evidence not faith must determine beliefs and actions so are willing to look for such.


I'm old enough to remember when "cult" didn't just mean "people who have high confidence in ideas that seem weird to me".


The cult of 'AI will be totally fine' is a religious belief that brooks no dissent, and unfortunately has its claws in a lot of otherwise smart people.


That's a strawman. People who scoff at the "AI risk" cult don't believe that "AI will be totally fine". They just believe that it doesn't deserve the undue amount of attention it gets, which distracts from other more important and related risks, such as what Cambridge Analytica did for Trump, fake news due to internet, Facebook's bubbles, etc. The belief is to solve problems we have right now, or will have in 10-20 years, not the problems we'd have in 100 years.


I've been personally told by multiple different people who scoff at AI risk that it will be totally fine, so it's not a straw man.

The median expert estimate for when we'll be 10% likely to have human-level AI is ~10 years.

AI risk research didn't receive a penny of funding until the last few years, and is still funded at way lower levels than a lot of things that have dramatically less impact.

In nearly every debate on the topic I've seen (with a few exceptions), the people concerned about AI risk have carefully considered the topic, are aware of the areas where there's still a lot of uncertainty, and make clear and well-hedged arguments that acknowledge that uncertainty; meanwhile the people who scoff at it haven't read any of the arguments (not even in popular book form in Superintelligence), haven't thought about most of the considerations, and have a general air of "assuming things will probably be fine". That's not a straw man, that's just direct observation of the state of the debate. People are doing serious academic work on the topic and have thought about it very deeply; the standard HN middlebrow dismissal is both common and inappropriate.


That number didn't pass my sniff test, so I went looking. It seems to have come from here[1], which aggregates a series of surveys.

I first opened the "FHI Winter Intelligence" report: it's an informal survey of 35 conference participants, of whom only 8 work on AI at all (let alone being experts in AGI).

I then looked at the "Kruel interviews", which the site reports as giving a prediction of "2025" for a 10% chance, yet reading the interviews it's quite clear that many gave no prediction at all. Also, averaging answers from people ranging from Pat Hayes to PhD students seems suspect.

Is your number based on these reports?

[1] http://aiimpacts.org/ai-timeline-surveys/


Sorry, gave a citation in another comment on the thread but not in this one. I was referencing http://sophia.de/pdf/2014_PT-AI_polls.pdf


Are they actually experts, though? From that paper:

  “Concerning the above questions, how would you describe your own expertise?”
  (0 = none, 9 = expert)
  − Mean 5.85

  “Concerning technical work in artificial intelligence, how would you describe your own expertise?”
  (0 = none, 9 = expert)
  − Mean 6.26
Also, the whole methodology of aggregating the opinions of random conference attendees seems suspect to me. Attending a conference doesn't make you an expert.


You can restrict your attention to the TOP100 group if you prefer.


Yeah, but then you just have 29 responses in total.


The word "just" feels awfully out of place in this context. If you were doing some kind of broad-based polling of public opinion, of course you'd want a bigger sample size, but 29 of the top 100 researchers in a field sounds like a hell of a good sample to me, and well worth listening to.


If they were 29 random researchers, I'd agree, but since they were self-selected, not really. They try to check whether the sample is biased, but it's not convincing.


From the first paragraph of Turing's famous essay on AI:

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll.


You're piling strawman on top of strawman, while suffering from confirmation bias. You think the AI risk guys are making well-hedged arguments because you believe in that thesis.

Andrew Ng, Yann LeCun, and many other people who have ACTUALLY worked in AI are the ones who scoff at it. They don't need to make arguments, because how do you respond to a young-earth creationist?

All of the arguments in Superintelligence or elsewhere are simply that AI will eventually exist. The only argument that it will come by 2050 is a badly conducted survey of non-experts.

Should we worry about all sorts of existential risk which could arrive in any undetermined time in the future?

The whole project is so absurd, it's hard even to begin to make any counter arguments, because none of the arguments make any sense.


Is this a badly conducted survey of non-experts? http://sophia.de/pdf/2014_PT-AI_polls.pdf

Edit: In particular, the TOP100 subgroup.


Yes. The other 3 groups were not AI researchers. And the TOP100 group had a response rate of 29%, which adds self-selection to the process. People who are interested in AI risk et al. are more likely to respond to such a survey, adding bias. Andrew Ng or Yann LeCun or anyone actually working in AI would have refused (and probably did refuse) the invitation. Also, this TOP100 group of people are also more likely to be GOFAI folks who pretty much have no idea about the current data-driven deep learning-based AI.

Even Stuart Russell, the only CS guy in the AI risk camp, doesn't actually believe that AGI is anywhere near. But he works on it simply because he thinks we can solve some of the problems, like learning from demonstrations instead of (possibly faulty) rewards. That's actually a core AI research topic, not an AI ethics/values/blah-blah topic. Oh, and also because this allows him to have a differentiated research program, and thus directs any funding in this niche to him.


Seems pretty clear neither of us will convince the other in this venue, so I'm going to leave it here. Thanks for the discussion :)


You can't get rid of human religious longing easily. Instead of the Rapture and Armageddon, we have transhumanism and AI risk.


Fun fact: You can apply this template to anyone worried about any potential disaster.

"Behold the cult of global warming, who believe that the unclean practices of man will anger the sky gods and bring down furious vengeance. Sad to see so many otherwise-smart climatologists drawn in by this drivel."

"'Antibiotic resistance'? Oh my god, you must be one of those 'biosecurity cultists'. Do you really believe that magical microscopic beings are going to grow strong because we're feeding farm animals the wrong food, and that those same invisible creatures will destroy human civilization? Oh man, do you have chants? What a nutjob."

"Seriously, you're really afraid of 'nuclear weapons'? You really believe that people will somehow cast some magic spell that will turn rocks into fire and destroy entire cities, that this same magic will cause 'mutations' and a 'nuclear winter'? Can't you see that you're part of a doomsday cult?"


Thing is, we have good evidence that all those things actually happen (except maybe the nuclear winter), and that they are/were increasing.

Meanwhile, all we have from the AGI doomsday people is sci-fi stories, and actual programs that still can barely distinguish a cat from a mole even after looking at thousands of pictures.


It would be really useful if there were some historical example of a sudden increase in intelligence leading to a new class of agents taking over the Earth and determining its future.

Or maybe if there were any evidence that AI capabilities are increasing, like becoming dominant at Chess or Go, driverless cars, or a slew of recent papers on transfer learning.

Maybe if one of the co-authors of the leading AI textbook, Stuart Russell, voiced concerns, we could count that as evidence.

But you're right, it's better to wait until we know for a fact that someone's built an agent smart enough to end civilization before we commit any resources to the problem.


Driverless cars are a great example. Thirty years after the Navlab¹, and twenty after it drove from Washington to San Diego mostly by itself², we still don't have a viable model that can safely navigate even a highway. But "human-level AI" is ten years away?

To me, all these discussions sound like someone who read From the Earth to the Moon in 1865 and started working on how to keep the humans from being harmed by the explosion in the barrel.

¹ https://www.youtube.com/watch?v=ntIczNQKfjQ

² https://www.youtube.com/watch?v=bdQ5rsVgPuk


Nobody is saying human-level AI is ten years away; people are saying there's some chance of it in ten years. The median estimate for a 50% chance is quite a bit further out. But a 10% chance just means you divide the impact by ten, and with an event like this that still leaves you with a really damn big number.


I get that, but I still find the numbers absolutely unrealistic. They're just setting the field up for another AI winter.


If an AI winter happens again, it's not going to come from the AGI side but from the over-hyped deep learning stuff failing to provide much business value at the enterprise level (where it's only starting to be exploited by the big players), at least compared to simple linear regression.

The AlphaGo situation is an interesting one. Did you have any predictions for when an AI would beat a top pro at Go? I didn't really learn to play until sometime in 2015, but I was amazed AIs still hadn't dominated; they weren't even close. Still, I saw that with the single trick of MCTS, AIs had improved a lot and seemed to show steady improvement year after year for a while. I don't think it would have been unreasonable to predict that at some point, with X amount of computing power, an AI could be made to win. Then later that year I saw a paper reporting that a deep-learning-based bot was beating all the MCTS bots. Immediately it seemed clear that the first person to create a fusion of deep learning + MCTS would create a very strong Go AI, but would it beat pros? Maybe with a year of effort by a big company using custom hardware like IBM's Deep Blue, or more likely these days GPU clusters, but would it happen soon? Not for a year at least. It turns out it was already happening (the AlphaGo team started in 2014, and Facebook had a project going too), and the announcement of the Fan Hui matches took a lot of people by surprise. Some were surprised because their predictions ignored the advances in MCTS, deep learning, or both, and so still put computer victory many years away. I was more surprised it was done without anyone hearing about it sooner. Even so, it wasn't clear AlphaGo could beat Lee Sedol a few months later, since there's another big gap between lower pros and top pros, but it did.

A couple of lessons to take from AlphaGo: we don't necessarily know what's actively being worked on around the world, nor how far along it is, and problems that seem insurmountable with current computer hardware can suddenly be solved with the right fusion of existing ideas that haven't yet been combined.

Black swans are hard to predict, and disagreements over the predictions are totally normal and fine. The harder disagreement is getting people to accept that, should nothing specifically hinder it (like an extinction event from an asteroid or disease, or successfully enforced bans on AI research), AGI is inevitable at some point in humanity's future. There probably should be some research done into its safety, and since the prediction problem is so uncertain, there's no pressing reason not to start now, or indeed 10 years ago. "You could have used that money for feeding Africans!" is a non-argument.
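
For readers unfamiliar with what that "fusion of deep learning + MCTS" looks like, here is a minimal AlphaGo-style PUCT sketch. The policy/value network is replaced by a stub returning uniform priors and a noisy value, and the game is a toy subtraction game, so this only illustrates the search structure, not a working Go engine:

  import math, random

  def legal_moves(n):                        # toy game: take 1-3 stones, last stone wins
      return [m for m in (1, 2, 3) if m <= n]

  def policy_value_stub(n):
      # Stand-in for trained policy/value networks: uniform priors, noisy value.
      moves = legal_moves(n)
      return {m: 1.0 / len(moves) for m in moves}, random.uniform(-1, 1)

  class Node:
      def __init__(self, prior):
          self.prior, self.visits, self.value_sum, self.children = prior, 0, 0.0, {}
      def q(self):
          return self.value_sum / self.visits if self.visits else 0.0

  def select(node, c_puct=1.5):
      # PUCT: balance observed value (Q) against the network prior (P).
      total = sum(c.visits for c in node.children.values())
      return max(node.children.items(),
                 key=lambda kv: kv[1].q() + c_puct * kv[1].prior
                 * math.sqrt(total + 1) / (1 + kv[1].visits))

  def simulate(node, n):
      # One simulation; returns the value from the current player's point of view.
      if n == 0:
          return -1.0                        # the opponent just took the last stone
      if not node.children:                  # leaf: expand using the "network"
          priors, value = policy_value_stub(n)
          node.children = {m: Node(p) for m, p in priors.items()}
          return value
      move, child = select(node)
      value = -simulate(child, n - move)     # value flips between the two players
      child.visits += 1
      child.value_sum += value
      return value

  def best_move(n, simulations=500):
      root = Node(1.0)
      for _ in range(simulations):
          simulate(root, n)
      return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

  print(best_move(10))   # with trained networks instead of the stub, this gets strong

The design point is that the prior term steers the search toward moves the network likes while the Q term keeps rewarding moves that actually work out in simulation; that combination is what let the tree search scale to Go.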


"We have to defuse this bomb or it will destroy the city in one hour!"

"But there are people suffering right now!"


Twist: the bomb is a cartoon someone drew on a napkin


Twist: there's an actual bomb, but everybody thinks you're talking about a napkin.


>I think there are more important causes than "reducing potential risks from advanced AI". Honest to god, $30M will go a long way in saving lives TODAY.

If the intelligence explosion hypothesis proves true even 100 years from now, it will have the potential to decide the fate of humanity.

Given the stakes, beginning foundational research as early as possible is prudent. $30M in the grand scheme of things isn't much at all. Expressed as a fraction of the planet's total wealth, it amounts to something utterly infinitesimal. Yet the potential payoffs are immeasurable.

Philanthropy and research aren't some crude single-threaded triage process. The timescales involved, combined with the fact that you can't just throw money at some problems, demand a diverse approach that pursues many ends in parallel.
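
To make the "fraction of the planet's total wealth" point concrete (the global wealth figure below is a rough, commonly cited late-2010s estimate and is my assumption, not a number from the grant write-up):

  grant = 30e6            # USD
  global_wealth = 280e12  # USD, rough estimate of total global household wealth
  print(f"{grant / global_wealth:.1e}")  # ~1.1e-07, i.e. about a ten-millionth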


That's Pascal's Wager.

What makes you think that OpenAI is doing anything that has any meaningful impact on AGI? We have no idea what AGI looks like.


Many people's estimate of the probability-weighted impact OpenAI will have is much greater than anything that I'd ever call a Pascal's Wager. You may disagree, but a lot of us actually have our expectations somewhat quantified and grounded, which is very different from the argument as applied to religion.

Personally, I think there's a greater than 10% chance that we'll see AGI that definitively surpasses human ability within the next 50-100 years (growing a lot higher as we get near/past 100). And given what's coming out this early on with minimal funding, I expect that the work that OpenAI does in the near future will have at least a 10% chance of strongly influencing the direction of that AGI work during the critical turning points. A 1% or more chance of their work mattering a lot is plain old betting-on-a-black-swan territory, not Pascal's Wager.


Well put. The Open Philanthropy folks take a broadly similar view of the probabilities, both of when strong AI is likely to arrive and of how much positive influence per dollar they can have on how things turn out:

http://www.openphilanthropy.org/blog/potential-risks-advance...


You've brought up Pascal's Wager twice in this thread, as if it really can end an argument by itself. Just because you don't like what high impact / low probability EV calculations do in Theology doesn't mean you can wave away high impact / low probability EV calculations in other domains.


I agree that "that's Pascal's wager!" isn't a reasonable response to someone arguing that, say, a 1% or 10% extinction risk is worth taking seriously. If you think the probability is infinitesimally small but that we should work on it just in case, then that's more like Pascal's wager.

I think the whole discussion thread has a false premise, though. The main argument for working on AGI accident risk is that it's high-probability, not that it's 'low-probability but not super low.'

Roughly: it would be surprising if we didn't reach AGI this century; it would be surprising if AGI exhibited roughly human levels of real-world capability (in spite of potential hardware and software improvements over the brain) rather than shooting past human-par performance; and it would be surprising if it were easy to get robustly good outcomes out of AI systems much smarter than humans, operating in environments too complex for it to be feasible to specify desirable v. undesirable properties of outcomes. "It's really difficult to make reliable predictions about when and how people will make conceptual progress on a tough technological challenge, and there's a lot of uncertainty" doesn't imply "the probability of catastrophic accidents is <10%" or even "the probability of catastrophic accidents is <50%".


>>high impact / low probability EV calculations in other domains

Yeah, but there isn't a calculation of the risk/probability of AGI; there's just an appeal to the idea.


That's… not at all true? There have been numerous thorough attempts at coming up with as reasonable an estimate as possible, involving in-depth conversations with most of the experts in the field. Maybe it's not strictly a "calculation", but it's a hell of a lot more than just an appeal to the idea of AGI.


Except that there's no such thing as the field of "AGI" (or if there is, it's a subfield of philosophy). Asking modern ML and deep learning researchers about their thoughts on AGI is like asking the Wright brothers about a Mars mission.

Sure, they're the closest thing we have to "experts", but there are not just one but likely 5 or 10 more field-changing, if not world-changing, leaps we need to make before the technologies we have will resemble anything like AGI.

And that's ignoring the people who ascribe deity-like powers to some potential AGI. Air gap the computer and control the inputs and outputs. We can formally prove what a system is capable of. That fixes the problem.


> Air gap the computer and control the inputs and outputs. We can formally prove what a system is capable of. That fixes the problem.

Debates about superhuman AI have focused quite a lot on what it would mean to "control the inputs and outputs" while still being able to get some kind of benefit from the AI.

You can indeed formally prove that a computer will or won't do certain things, so you could use that for isolation purposes. But in order to be useful, the AI needs to interact with people and/or the world in some way. Otherwise it might as well be switched off or never have been built in the first place.

If it's really a superhuman intelligence with superhuman knowledge about the world, then interacting with people is where the risk creeps back in, because the AI could make suggestions, recommendations, requests, offers, promises, or threats. Although there are plenty of ideas about limiting the nature of questions and answers, having some kind of separate person or machine judge whether information from the AI's communications should be used or how, or limiting what the AI is programmed to attempt to do or how, none of these measures are straightforward to formally prove correct in the way that simpler isolation properties are.

If we made contact with intelligent aliens, would formal proofs of correctness of the computers through which we (say, exclusively) communicate with them guarantee that they couldn't massively disrupt our society by means of what they had to say?


Your opinion about how far off AGI is is a totally reasonable one, and some AI experts agree with it. Other AI experts disagree, and think that AGI is basically tractable with current research methods plus a breakthrough or two.

Yes, there's some additional uncertainty about whether you're even asking the right people. But you can take account of that uncertainty, both by widening your error bars and by asking people from other fields as well (including philosophers, e.g. Bostrom). What you can't do is just throw up your hands and say "it's unknowable".

This is all beside the original point, which is that these arguments are much more rigorously grounded than just a wave in the direction of AGI. Could they be better? Sure, and there are a lot of people who'd be interested in seeing some better estimates. But for now, they're the best we have, and they're a completely reasonable thing to base decisions on.

> We can formally prove what a system is capable of. That fixes the problem.

That's exactly what some of the people working on this problem are trying to do, but it's a hell of a lot harder than you make it sound. Formal methods have come a really long way, but they're not even remotely close to being able to prove that an AGI system is safe (yet).


You don't need to prove anything about an AGI system if you can prove things about its I/O capabilities. I recognize that proving things about machine learning models is very hard; I tried to do some research in that area and got practically nowhere.

And I think you and I have very different definitions of rigorous. Like I said, unless you'd take Wilbur Wright's thoughts on a Mars mission seriously, I don't think you should give much weight to Geoff Hinton's thoughts about when we'll get AGI, and I say that having enormous respect for him and his achievements (and using him as a simple example).

It's pseudoscience. We're notoriously bad at predicting the future. I don't see any reason to trust people going on about the dangers of AGI any more than the futurologists of my parents' generation, who predicted flying cars and interstellar travel but missed smartphones.


> You don't need to prove anything about an AGI system if you can prove things about its I/O capabilities.

But then you can't benefit from it, or you can only benefit from it except in narrowly predefined ways.


Perhaps, but if the alternative is either incredible fear of the technology, or the technology potentially killing humanity, then "making great strides in a select few fields" seems quite good.


I really, really don't think modern AI capabilities and AGI are as dissimilar as airplanes and rockets. Neither do many AI researchers. You're obviously welcome to disagree, but you don't just get to declare their opinions invalid because you think they're out of their depth.


I don't think AI researchers are "out of their depth" (I am one myself). And yes, there is room for different opinions.

However, I believe that the ones who say sensational things about AI doomsday are the ones who are disproportionately quoted by the media.


This is also true. You see all of these things like Hawking and Musk talking about the AI Apocalypse (when they aren't even experts in the field), and it gets people scared when, at this point, there's very little reason to be.

On the other hand, I don't actually know Hinton's opinion on these things, and he might agree with me (in which case, you absolutely should listen to him!). But instead the loudest voices are perhaps the most ridiculous.

That said, I do think that if I asked you to make a bet with me on when we'll reach AGI, even at 50% confidence, your error bars would be on the order of a century.


"We can formally prove what a system is capable of."

This is not true. By Rice's theorem, either the system is too dumb to be useful, or we can't prove what it can do.


This is an abuse of Rice's theorem (which seems to be getting more common).

Rice's theorem says that no program can correctly decide what an arbitrary program will do, not that no properties of programs can be proven. There are useful programs about which non-trivial properties can be, and have been, proven, and Rice's theorem is no limit on the complexity of an individual program about which a property may be proven, or the complexity of a property which an individual program may be proven to exhibit.

Usually programs with provable properties have been intentionally constructed to make it possible to prove those properties, rather than having someone come along and prove a property after-the-fact.
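To make the distinction concrete, here's a minimal Python sketch (the decider name is hypothetical, purely for illustration). A universal decider for even one simple semantic property, "does this program ever print 'hello'?", would let you solve the halting problem, which is exactly what Rice's theorem rules out. Proving an invariant of one specific, deliberately simple program is a different and entirely feasible task:

  # Hypothetical universal decider; Rice's theorem says it cannot exist,
  # because it would solve the halting problem:
  def would_solve_halting(decides_prints_hello, program, program_input):
      def rigged():
          program(program_input)   # runs forever iff `program` doesn't halt
          print("hello")           # reached iff `program` halts
      return decides_prints_hello(rigged)

  # Nothing stops us from proving properties of a *specific* program:
  def clamp(x, lo, hi):
      # Provably returns a value in [lo, hi] whenever lo <= hi: a trivial
      # invariant, but the same kind of thing formal methods establish for
      # much larger programs that were built to be verifiable.
      return max(lo, min(x, hi))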


My argument is as follows. You're right, and I'm appealing to a stronger Wolfram-esque version of Rice, together with the fact that humans readily throw non-essential goals under the bus in pursuit of speed.

* Building AI is a race against time, and in such races, victory is most easily achieved by those who can cut the most corners while still successfully producing the product.

* As a route to general AI, a neural architecture seems plausible. (Not at the current state-of-the-art, of course.)

* Neural networks (as they currently stand) are famously extremely hard to analyse: certainly we have no good reason to believe they're more easily analysed than a random arbitrary program.

* A team which is racing to make a neural-architecture AI has little incentive to even try to make their AI easy to analyse. Either it does the job or it doesn't. (Witness the current attempts to produce self-driving cars through deep learning.) Any further effort spent on making an easily-analysable AI is effort which is wasting time that another team is using just to build the damn thing.

* Therefore, absent a heroic effort to the contrary, the first AI will be a program which is as hard as a random arbitrary program to analyse. And, as much as I hate to appeal to Wolfram, he has abundantly shown that random arbitrary programs, even very simply-specified ones, tend to be hard to analyse in practice.

(My argument doesn't actually require a neural architecture of the AI; it's just a proxy for a general unanalyseable thing.)


1. I'm not sure that I agree. Not all research is a race against time. But, perhaps you're right, I'll accept this.

2. Certainly the most plausible thing we have now. I'm not sure that makes it plausible, but it's better than anything else, so okay.

3. This depends on what you mean. Neural networks are actually significantly easier to analyze than arbitrary programs: when you essentially restrict yourself to two operations (matrix multiplication and a sigmoid or ReLU), things get a lot easier to analyze. Here are some questions we can answer about a neural network that we can't answer about an arbitrary program: "Will this halt for this input?", "Will this halt for all inputs?", "What will a mild perturbation of this input do to the output?" These follow from finiteness and differentiability, which are not attributes that a normal program has (caveat: this gets more difficult with things like RNNs and NTMs, but afaik it's still true). See the sketch after this list. The questions we find difficult to answer for a neural network are very different from those for a normal program: namely, "How did this network arrive at these weights as opposed to these other ones?" and, relatedly, "What does this weight or set of weights represent?" But I don't think there's any indication that those questions are impossible to answer, and often we can answer them, as with facial recognition networks where we can clearly see that successive layers detect gradients, curves, facial features, and eventually entire faces.

4. Agreed. There's no real reason to know why it works if it works.

5. I think you can tell, but I don't think this holds.
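Here's the sketch promised in point 3, as a tiny NumPy network with made-up layer sizes and random weights standing in for whatever training would produce. It runs a fixed, finite sequence of operations, so "does it halt?" is trivially yes for every input, and differentiability lets you predict the effect of a mild perturbation from the Jacobian and check that prediction numerically, neither of which you can do for an arbitrary program:

  import numpy as np

  rng = np.random.default_rng(0)
  # A 2-layer ReLU network; sizes and weights are illustrative stand-ins.
  W1, b1 = rng.standard_normal((16, 8)), rng.standard_normal(16)
  W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

  def net(x):
      # A fixed, finite sequence of multiplies and ReLUs: it halts for every
      # input by construction, since there are no loops to reason about.
      return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

  x = rng.standard_normal(8)
  eps = 1e-4 * rng.standard_normal(8)

  # First-order prediction of the perturbation's effect via the Jacobian.
  active = (W1 @ x + b1) > 0                      # which ReLUs are on at x
  jacobian = W2 @ (W1 * active[:, None])          # d net / d x at this x
  print(jacobian @ eps, net(x + eps) - net(x))    # should agree closely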


Those arguments are plausible, and thanks for the clarification.

I just hate to see Rice's theorem interpreted as "nobody can ever know if a program is correct or not". People have been making a ton of progress on knowing if (some) programs are correct, and Rice's theorem never said they can't.


An airgapped computer is not going to be capable of taking over the world.

Further, Turing completeness is not required to be "useful". You can get to the moon without Turing completeness.


I didn't address airgappedness at all, but you may still be wrong: we use the superintelligence's output (or else why would we have created it?), so we are the channel by which it affects the world. Anyway, who knows what can be done by a superintelligence which can control a stream of electrons emitted from its own CPU!


Well, exactly as much as any other CPU can do by solely controlling its output electrons: not much. Let's not ascribe deity-like powers to things that we can understand fairly well.


Have you ever seen a CPU running a program which is trying to communicate with the outside world through its electron side-channels? As far as I can see, your argument is "no computer has ever done this, and also we understand computers well enough to know that no computer ever will". The first clause is obvious, since no computer has ever been made to do this. The second clause is assuming without proof that we will never make a superintelligent AI. Just because you don't see how to exploit the side-channels of your system, doesn't mean they're unexploitable. This is the lesson of all security research ever.


Pray explain how you could use the electrons coming out of a CPU as a side channel. I don't need anything specific, but I'd prefer something that doesn't sound like it's taken out of a Heinlein novel.

You're again using terms incorrectly. A "side channel" implies that someone is listening to information that is unintentionally leaked. Unless your expectation is that this CPU is going to start side-channeling our minds with the EM waves it's emitting (which again, "deity-like attributes"), we'd need to be specifically listening to whatever "side channel" it uses, and it would require knowledge of and access to that side channel.

Something being able to send additional information over a side channel doesn't help unless that information is received, and so realistically, unless your hypothesis is "mind control/hacking the airwaves/whatever via sound waves the chip emanates" or similar, which are preposterous, it'll always be just as easy for the thing to transmit information via the normal channels.


A side channel is a channel through which information may leak because of the physical instantiation of an algorithm. It's not much of a stretch to include "things which let us manipulate the world" in that; do you have a better term? I thought the meaning was obvious, but apparently it's not: by "side channel" I here mean "unintended means of affecting the world by a mechanism derived from an accidental consequence of the physical implementation", by analogy with the standard "information"-related "side channel".


I think the closest conventional thing would be a sandbox escape/backdoor, although (not that I'm an expert) I've never heard of anything close to a sandbox escape using side-channel-like techniques. That said, most side-channel attacks are either timing-based or involve things like the heat and power usage of the system.

The thing about all of these is that they generally allow you to get a small amount of data out, which can sometimes help you with things. But again, without ascribing magic powers to the system, all the stuff it can directly affect (power draw, temperature, disk spin speeds, LED blink patterns, noises, even relatively insane things like EM emissions) can be monitored and controlled fairly easily, and no matter how smart it is, I don't see an AGI violating physics.
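To pin down what I mean by the conventional, timing-based kind, here's a toy Python example (the secret/guess setup is made up purely for illustration): a string comparison that bails out at the first mismatch leaks, through runtime alone, how many leading characters of a guess are correct.

  import time

  def insecure_compare(secret, guess):
      # Bails out at the first mismatch, so runtime grows with the length
      # of the matching prefix. That runtime is the side channel.
      for s, g in zip(secret, guess):
          if s != g:
              return False
          time.sleep(0.001)  # exaggerate per-character cost for the demo
      return len(secret) == len(guess)

  def avg_time(secret, guess, trials=5):
      start = time.perf_counter()
      for _ in range(trials):
          insecure_compare(secret, guess)
      return (time.perf_counter() - start) / trials

  secret = "hunter2"
  for guess in ["zzzzzzz", "huzzzzz", "huntzzz"]:
      print(guess, round(avg_time(secret, guess), 4))
  # Longer matching prefixes take measurably longer, leaking the secret
  # one character at a time: no magic, just the physics of execution time.

And that's the flavor of what these attacks buy you: a slow trickle of bits to a listener who's already looking for them, not some new physical capability.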


I distinctly remember a paper about AI figuring out how to either get wifi or send radio waves without access to the relevant hardware. Can't find the link at the moment though :/


I expect you're referring to this article:

https://www.damninteresting.com/on-the-origin-of-circuits/


Right, because there's no way even small computers can communicate through air. Just one tiny crack is all that's needed.

And that's not even going into things the AI might say that'll convince the gatekeepers to just voluntarily let it out.


This makes me think you don't know what the word "airgapped" means in this context.


Sure, you're counting on the AI to not be able to exploit its hardware to communicate with anything. That seems like a huge assumption against a super intelligence.


Experts in the field of AI have a long history of being wrong. "AI is 20 years away" has become the standard joke in the field. There is no reason to grant expert opinion on this topic any weight.


What's your suggested alternative? It's really easy to criticize, but a lot harder to do any better. If you aren't suggesting any other methods, you're implicitly saying we should just ignore the problem. I really don't think that will end well.


My suggested alternative is prioritizing problems appropriately. This "Open Philanthropy Project" is a guy taking money that used to be for curing malaria (and I respect what he used to do on that front!) and giving it to his roommate.

You can do better solving a terrible problem that does exist than solving a sci-fi problem that doesn't exist.

I know it's usually uncouth to compare different charities like this, but that's exactly what Effective Altruism was supposed to be about, and this cause directly competes with curing malaria.

If any AGI technology actually starts to exist, we can re-prioritize. It makes no sense for a field to go from not existing to superhuman performance without anyone noticing.


Nobody thinks we won't notice when it's more imminent; the concern is that capability research has always been faster than safety research. If we wait until it's imminent, we risk having no chance of catching up in time. Why not devote a tiny fraction of our funding to starting the research now, so we can be ready on time? Worst case we're early... which in this context is a lot better than being late.


Worst case the safety research ends up causing the AGI to be actively antagonistic. Slightly less worst case it never leads anywhere and AGI ends up turning us into mouthless slugs anyway. Third-ish worst it makes AGI suborn-able and whoever gets to it first becomes our god-king.

many many worse situations later we get to:

It never leads anywhere but AGI never happens anyway and it just ends up being welfare-for-future-phobes instead of curing debilitating disease or buying every homeless person in a city a new pair of shoes or whatever.

"we're early" is in fact the best possible scenario, actually it's basically the only positive scenario.


You admit that you don't know what the actual probabilities are. Or you admit you have an extremely low confidence in your estimate of the probabilities.


Have you read any of the estimates? Basically every expert gives a huge confidence interval, as do most of the people actively working on AI risk.


Saying you are 95% confident AGI will come in 50-100 years (a "huge" confidence interval) is still far too confident.


I don't recall seeing 95% confidence intervals that narrow. Even the 90% confidence intervals I see are usually like 5-150 or something. (Also it would be super weird to me if the lower bound of someone's 95% CI was 50.)


Human species extinction is something almost everyone agrees shouldn't happen.


I'm one person who isn't sure about that. I don't see why it wouldn't be good for us to evolve into something more advanced. Whether that's through melding with AI or advanced genetic engineering, that outcome could be fantastic (assuming the transition is well managed, broadly distributed, happens through choice, etc.). It could lead to the elimination of disease, aging, a reduced ecological footprint, all kinds of benefits. And of course we've evolved before; it would be sad, I think, if this were the final state. I think we can do better!


I don't think people are primarily concerned about this outcome. They are more concerned about outcomes like "all the matter and energy in our light cone have been repurposed to maximize the total number of paperclips".


Pretty sure AJ007 wasn't including transhumanism in the things they were calling "human species extinction", so I doubt you actually disagree.


On the plus side it eliminates all human suffering. So from a negative utilitarian view it's a win. Unless, of course, there are more-suffering species out there that humans could eliminate suffering for.


Do you disagree with any research looking to prevent existential risk to our species, or do you believe AI cannot become an existential risk?


I think existential risks to our species of much higher magnitude exist today. Climate change and pollution for instance. I think a "possible existential threat in the future" is a weaker case for philanthropy than existing ones.


Neither of those issues can be solved by throwing money at them, especially not the first. They require political force.

Then again, the money could be used to buy politicians' votes in favor of environment-friendly policies. It's so blatant in the U.S. these days.


I have a simpler idea: how about they show how to stop a single AI from injuring/killing a single human?

The army has already developed the capability to bomb your house based on your metadata. Once the Pentagon decides to remove humans from the loop, how are these people going to stop them? Hell, even stopping a single DDoS would be awesome.

Until they show they can do this extremely simple thing, I'll remain skeptical.


It's about expected value. A 0.0001% chance of reducing an extreme risk will have a very high expected value.
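The arithmetic behind that, with deliberately crude, made-up numbers, just to show the shape of the argument:

  # Toy expected-value arithmetic; every number here is made up.
  p_success = 1e-6                   # the "0.0001% chance" above
  lives_at_stake = 7.5e9             # roughly everyone alive today, ignoring future people
  print(p_success * lives_at_stake)  # 7500.0 expected lives, which is why tiny
                                     # probabilities of huge outcomes dominate
                                     # naive expected-value comparisons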


You can justify lots of things with Pascal's wager, but most of them are religions.


Following that line of reasoning to its logical conclusion, there should only ever be one charity at a time: the most important charity in the world, as determined by you.


Don't worry about the downvotes; those of us outside the Valley bubble agree.


Yeah I feel like it's turning AI into a bit of a joke amongst other theoretical computer scientists.



