Microsoft is investing $1B in OpenAI (openai.com)
1119 points by gdb on July 22, 2019 | 592 comments



The New York Times has a bit more context:

> Mr. Nadella said Microsoft would not necessarily invest that billion dollars all at once. It could be doled out over the course of a decade or more. Microsoft is investing dollars that will be fed back into its own business, as OpenAI purchases computing power from the software giant, and the collaboration between the two companies could yield a wide array of technologies.

https://www.nytimes.com/2019/07/22/technology/open-ai-micros...


(I work at OpenAI.)

> It could be doled out over the course of a decade or more.

The NYT article is misleading here. We'll definitely spend the $1B within 5 years, and maybe much faster.

We certainly do plan to be a big Azure customer though!


>>"We certainly do plan to be a big Azure customer though!"

That's great. One question: where can I use Gym or Universe in the cloud with the render() option?

I've spent many hours trying to set up the environment in the cloud [1] without success.

[1]: https://stackoverflow.com/questions/40195740/how-to-run-open...
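
(For reference, the usual workaround is to give render() a virtual framebuffer to draw into. A rough sketch, assuming Xvfb and pyvirtualdisplay are installed on the VM, with CartPole as a stand-in env:)

    # assumes: apt-get install xvfb && pip install gym pyvirtualdisplay
    from pyvirtualdisplay import Display

    display = Display(visible=0, size=(1400, 900))  # headless X display
    display.start()

    import gym

    env = gym.make("CartPole-v1")
    obs = env.reset()
    for _ in range(200):
        frame = env.render(mode="rgb_array")  # numpy array, no window needed
        obs, reward, done, _ = env.step(env.action_space.sample())
        if done:
            obs = env.reset()
    env.close()
    display.stop()

Writing the returned frames out to a video file tends to be easier than trying to get a live window out of a cloud VM.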



Probably a bit off topic here, but I was wondering if you could shed some light on why Elon Musk parted ways with OpenAI. On Twitter he said he disagreed with what OpenAI wanted to do. Could you please tell me more about that? It seems what OpenAI is doing is pretty great.


Sam Altman is on record saying that they asked him to leave because he recruited talent from OpenAI for his other companies. Sam seemed quite philosophical about it though and was complimentary to Musk otherwise. It doesn't sound like there was a ton of bad blood there.


Is this the response you're referring to, with Sam Altman on record? https://openai.com/blog/openai-supporters/



Seems like an entirely reasonable choice. He’s actively commercializing AI and doesn’t want any impression of conflict of interest. Especially as now OpenAI has some clear commercialization plans directly with Microsoft.

From the article:

> The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.

Attracting talent will be OpenAI's hardest challenge in reaching its goals.


Will this be $1 billion in cash, or in Azure credits?


Cash investment, pointed out elsewhere in the thread.


Are you planning to open offices in Europe? Paris and London have increasingly strong AI ecosystems.


Greetings! I have some questions!

Since you work for OpenAI, are you looking at actual brain processes at all? I read the article and understand you'll be a big Azure customer; I wonder whether you'll also be conducting some brain research, though. I believe that for AGI to happen we need to understand the brain.

I work with cerebral organoids, consciousness studies, and (quantum) physics.

I'd love to share / connect. We are launching the cerebral organoids into space today on a SpaceX rocket at 6pm EST; there are some thunderstorms, so we're hoping there aren't any further delays. DM me?

https://www.spacex.com/webcast


The launch has been moved to tomorrow at 6pm EST.


How do the funding structure, employee comp / incentives, and licensing compare to DeepMind's?


> (I work at OpenAI.)

Can you do an AI CTF like the Stripe distributed systems CTF sometime?


We should build one ourselves, actually. That's a terrific idea.


Thanks for the clarification gdb!

excited about the announcement.


Gosh, it's going to be so interesting to see a fabric of AI exist over the next decades across various cloud infra...

Will they fight?

Azure AI layers and, say, private company AIs like FB's (Ono Sendai), GCP, AWS, etc., where these AIs start learning how to protect themselves from attack...

Obviously it's super trivial to make API mods to the FW/access rules in all these systems... so it will be trivial for them to start shutting down access (we have had this for decades, but it will be interesting to see it at scale).


Totally agree... but I also think the world will benefit as well. "Microsoft is investing dollars that will be fed back into its own business, as OpenAI purchases computing power from the software giant, and the collaboration between the two companies could yield a wide array of technologies."


1 billion worth of Azure credits?


What is that in SchruteBucks?


More importantly it's 231 days of extra lunch break!


Despite all the negativity in replies, I try to remain optimistic that this investment in AGI-related research is going to be a net positive.

Congrats to the team, and break a leg!


I agree. We haven't had well-funded research facilities like this one since Bell Labs. Those were amazing days in which we saw a lot of breakthroughs. I wish companies like Microsoft and the rest would invest more in external research institutes.


I often think that if I were a billionaire, I'd rather spend hundreds of millions on some cool R&D projects than have some 100-meter boat that one uses only a few times a year. I could at least walk around in 20 or so years and say "I funded this" rather than pointing at a rusting boat nobody will ever care about.


Paul Allen had both:

https://en.wikipedia.org/wiki/Octopus_(yacht)

https://en.wikipedia.org/wiki/Allen_Institute

https://www.cs.washington.edu/building

In fact this tradition of rich people founding universities and funding research is nothing new. Stanford University was founded by a couple who said "The children of California shall be our children" after their child died. Andrew Carnegie founded the Carnegie Technical Schools, and John Harvard donated money and his library to a college founded two years earlier.


Allen donated about $2 billion to charitable causes [1] while he was alive. This included several whimsical things that weren't research, like the Experience Music Project and the Museum of Science Fiction. I believe he spent far more, relatively, on the luxuries he liked, including the world's most expensive yacht(s), a fleet of private jets, mansions around the world, private music concerts, and a few sports teams here and there.

[1] https://www.philanthropy.com/article/Paul-Allen-s-2-Billion-...


> This included several whimsical things that weren't research, like the Experience Music Project and the Museum of Science Fiction.

While not research, those things can have profound impacts on people. Several years ago a Star Wars exhibit came to the Indiana State Museum here in Indianapolis, and it had an entire section dedicated to prosthetic devices, both in the films and in real life. One of the video segments playing next to some props from the film and real prosthetic devices was a clip of one of the inventors of the real technology talking about how watching the film version directly led him to pursue his career working on various prosthetic devices, trying to make them a reality.

These sorts of experiences can have a profound impact on the creative process of one or more individuals, which might have far greater effects on society than active research.


In many cases people will be more interested in your boat than in a bunch of startups nobody's ever heard of.


... unless one of those startups cures a/some cancer, heart disease, Parkinson's, Alzheimer's or ALS or something.

If I were a billionaire, that's exclusively where I'd be putting my money, selfishly.


That's a strange sentiment for a thread on OpenAI, considering it is one of many startups founded by a guy who decided to take the millions from his sale of PayPal and do cool R&D projects like spaceships, electric cars, solar power, AI, and brain-machine interfaces. Good thing Elon Musk didn't buy a boat, I guess.


> I often think that if I were a billionaire, I'd rather spend hundreds of millions on some cool R&D projects than have some 100-meter boat that one uses only a few times a year.

Billionaires buy cars and boats because they're stores of value. For instance, a McLaren from the '90s is worth more today than when it was sold.


Sports cars and boats cost a fortune to maintain. They're terrible financial vehicles.


This article is more than five years old, so I'll let it speak for itself:

This shows that in the 12 months to the end of June the value of classic cars as a whole was up by 28%, which compared with a rise of 12% for the FTSE-100 index of leading shares and a 23% slump in the price of gold.

https://www.theguardian.com/business/2013/sep/07/luxury-inve...


A couple weeks ago, Google announced it will be researching AI in China.

HN had more positive comments regarding that announcement.


The Internet is a strange place..


Thank you!


The research that OpenAI’s doing is groundbreaking and the results are often beyond state-of-the-art. I aim to work in one of your research teams sometime!


Watch the Kool-Aid intake and you'll be just fine. Dreams are great and an absolute necessity for success but create your own. Don't buy into everything you hear, especially Elon Musk talking about Artificial General Intelligence.


Oh, I'm well aware of the hype around AGI. My personal view is that AGI is kind of an asymptotic goal, something we'll get kind of close to but never actually reach. Nevertheless, I would like to work on more pragmatic goals, like improving the current state-of-the-art language models and text generation networks. I'm actually starting by reimplementing Seq2Seq as described by Quoc Le et al.[1] for text summarization[2] (this code is extremely messy but it'll get better soon). It's been interesting to learn about word embeddings, RNNs and LSTMs, and data processing within the field of Natural Language Processing. Any tips on how to get up to speed within this field would be helpful, as I'm trying to get into research labs doing similar work at my university.

[1]: https://papers.nips.cc/paper/5346-sequence-to-sequence-learn... [2]: https://github.com/applecrazy/reportik/
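
(For anyone else starting out: the core encoder-decoder shape is small enough to sketch here. A minimal PyTorch version with made-up sizes, teacher forcing, and no attention - not how reportik does it, just the general idea:)

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        """Encode the source document, then decode the summary conditioned
        on the encoder's final hidden state (Sutskever et al. style)."""
        def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, src, tgt_in):
            _, state = self.encoder(self.embed(src))      # keep only final (h, c)
            dec_out, _ = self.decoder(self.embed(tgt_in), state)
            return self.out(dec_out)                      # (batch, tgt_len, vocab)

    model = Seq2Seq(vocab_size=10000)
    src = torch.randint(0, 10000, (8, 400))   # 8 documents, 400 tokens each
    tgt = torch.randint(0, 10000, (8, 40))    # 40-token reference summaries
    logits = model(src, tgt[:, :-1])          # teacher forcing: feed summary shifted by one
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10000), tgt[:, 1:].reshape(-1))
    loss.backward()

From there, adding attention is the usual next step, since squeezing a whole document into one fixed-size state is the main bottleneck for summarization.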


AGI is not something unnatural that could never be attained. If biological systems can somehow attain it, there is no reason other kinds of man-made systems cannot.

The first main issue is compute capacity. The human brain has the equivalent of at least 30 TFLOPS of computing power, and this estimate is very likely off by two orders of magnitude.

Assume that simulating 1 synapse somehow takes only 1 transistor (a gross underestimate). Simulating the number of synapses in a single human brain would then require as many transistors as roughly 10,000 NVIDIA V100 GPUs, one of the largest mass-produced silicon chips!
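
(A rough back-of-envelope for that figure, assuming ~10^14 synapses per brain and ~21 billion transistors per V100 - both ballpark numbers:)

    synapses_per_brain = 1e14       # commonly cited ballpark (~100 trillion)
    transistors_per_v100 = 2.1e10   # NVIDIA V100: roughly 21 billion transistors

    # one transistor per synapse, the gross underestimate assumed above
    gpus_needed = synapses_per_brain / transistors_per_v100
    print(f"{gpus_needed:,.0f} V100-sized chips")   # ~4,760, same order of magnitude as 10,000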

The second main issue is training neurons that are far more complex than our simple arithmetic adders. Backprop doesn't work for such complex neurons.

The third big problem is training data. A human child churns through roughly 10 years of training data before reaching puberty. A man-made machine can perhaps take advantage of the vast data already available, but there still needs to be some structured training regimen.

So, relative to the human brain, current AI efforts are playing with toy hardware and toy algorithms. It should be surprising that we have gotten so far regardless.


>My personal view is that AGI is kind of an asymptotic goal, something we'll get kind of close to but never actually reach.

Personally, I think it is only a matter of time. Though I suspect that we will probably 'cheat' our way there first with the wetware from cultured neurons that various groups are developing, before we manage to create it in completely synthetic hardware. Also, it might just be the wetware that leads us to the required insights. This is very problematic territory however. I think we are very likely to utterly torture some of the conscious entities created along this path.


What has Musk got to do with this?


Have you thought about using AI instead of parts of the government? There must be a lot of bits that can be automated. Do you think that an AI-led government could remove the left/right divide that exists at the moment? If everyone just filled in a huge form that told the AI what was important to them, this could be used to drive policy.


Filling in a form about what is important to you is just a proxy for voting.

I don't think an AI would help the left/right divide in this way, because certain news outlets would still have the same incentives to manipulate what people desire in more extreme directions.


Indeed. Going by past leaps in science and technology, we will probably see something really cool and useful come out of this, even if it isn't AGI. I'm fine with getting a superbike even if the funding was for an impossible FTL drive.


Congrats on the fundraise Greg and team!

Does this mean that OpenAI may not disclose progress, papers with details, and/or open source code as much as in the past? In other words, what proprietary advantage will Microsoft gain when licensing new tech from OpenAI?

I understand that keeping some innovations private may help commercialization, which may help raise more funds for OpenAI, getting us to AGI faster, so my opinion is that could plausibly make sense.


We'll still release quite a lot, and those releases won't look any different from the past.

> I understand that keeping some innovations private may help commercialization, which may help raise more funds for OpenAI, getting us to AGI faster, so my opinion is that could plausibly make sense.

That's exactly how we think about it. We're interested in licensing some technologies in order to fund our AGI efforts. But even if we keep technology private for this reason, we still might be able to eventually publish it.


Eventually open ai?


I thought from day one that the name «OpenAI» would at best be a slight misnomer, and at worst indicative of a misguided approach. If AGI is close to being achieved, sharing key details of the approach with any actors at all could trigger a Manhattan Project-type global arms race where safety was compromised and the whole thing became insanely risky for the future of humanity.

Glad to see that the team is taking a pragmatic, safety-first approach here, as well as towards the near-term economic realities of funding a very expensive project to ensure the fastest possible progress.

In the early days of OpenAI, my thoughts were that the project had good intentions, but a misguided focus. The last year has changed that, though. They absolutely seem to be on the right track. Very excited to see their progress over the next years.


Not to worry. No one is anywhere close to achieving true AGI so any safety concerns are a moot issue. It's akin to worrying about an alien invasion.


> No one is anywhere close to achieving true AGI

No one knows how far off true AGI is, just like no one in 1940 (or 1910) knew how far off fission weapons were.

EDIT: I quite liked this article from a few years back [0], and the fission weapon prediction example is stolen from there.

0: https://intelligence.org/2017/10/13/fire-alarm/


Really? I thought by 1940 physicists generally understood fission and theoretically understood how to build a bomb - they just needed to find enough distilled fissile material (which was hard to do). And indeed, once they had enough U235, they had such a high degree of confidence in the theory, that they built a functioning U235 bomb without ever having previously tested one.


In 1939, Enrico Fermi expressed 90% confidence [0] that creating a self-sustaining nuclear reaction with Uranium was impossible. And, if you're working with U238, it basically is! But it turns out that it's possible to separate out U235 in sufficient quantities to use that instead.

On the 2nd of December, 1942 he led an experiment at Chicago Pile 1 [1] that initiated the first self-sustaining nuclear reaction. And it was made with Uranium.

In fairness to Fermi, nuclear fission was discovered in 1938 [2] and published in early 1939.

0: https://books.google.com/books?id=aSgFMMNQ6G4C&pg=PA813&lpg=...

1: https://en.wikipedia.org/wiki/Chicago_Pile-1

2: https://en.wikipedia.org/wiki/Nuclear_fission#Discovery_of_n...


> 90% confidence [0] that creating a self-sustaining nuclear reaction with Uranium was impossible

But the fact that Fermi was doing such a calculation in the first place proves that we knew in principle how a fission weapon could work, even if we didn't know "how far off [they] were". As soon as we figured out the moon was just a rock 240,000 miles away, we knew in principle we could go there, even if we didn't know how far off that would be.

By contrast, we don't know what consciousness or intelligence even is. A child could define what walking on the moon is, and Fermi was able to define a self-sustaining nuclear reaction as soon as he learned what nuclear reactions were. What even is the definition of consciousness?


> as soon as we figured out the moon was just a rock 240,000 miles away, we knew in principle we could go there, even if we didn't know how far off that would be

I have problems agreeing with that specific claim, knowing that both "the rock" and the distance were known to some ancient Greeks around 2200 years ago.

https://en.wikipedia.org/wiki/Hipparchus

Hipparchus estimated the distance to the Moon at between 62 and 80 Earth radii (depending on the method; he intentionally used two different ones). Today's measurements are between 55 and 64.


Holy shit, that is so impressive. They didn't even have Newton's law of gravity yet.

Once we had Newton's law of gravity though, we knew the distance, radius, mass, and even surface gravity of the moon. Would you say it's fair to say that by then we knew in principle we could go there and walk there?

(P.S. I assume you know this but the way you wrote your comment makes it seem like our measurements of lunar distance are nearly as inaccurate as Hipparchus's, when we actually know it down to the millimeter (thanks to retroreflectors placed by Apollo, actually). The wide variation from 55x to 64x Earth's radius is because it changes over the course of the moon's orbit, due to [edit: primarily its elliptical orbit, and only secondarily] the Sun and Jupiter's gravity.)


> The wide variation from 55x to 64x Earth's radius is because it changes over the course of the moon's orbit, due to the Sun and Jupiter's gravity

I think you’re not only wrong but even Kepler and Newton already knew that better than you:

https://en.m.wikipedia.org/wiki/Elliptic_orbit

“Strictly speaking, both bodies revolve around the same focus of the ellipse, the one closer to the more massive body, but when one body is significantly more massive, such as the sun in relation to the earth, the focus may be contained within the larger massing body, and thus the smaller is said to revolve around it.”

But maybe you have some better information?


No you're right, the Sun and Jupiter are a secondary effect to the elliptical orbit, I skimmed the Wikipedia page too quickly:

> due to its elliptical orbit with varying eccentricity, the instantaneous distance varies with monthly periodicity. Furthermore, the distance is perturbed by the gravitational effects of various astronomical bodies – most significantly the Sun and less so Jupiter

https://en.wikipedia.org/wiki/Lunar_distance_(astronomy)#Per...


Thanks! Now back to your other question:

> Once we had Newton's law of gravity though, we knew the distance, radius, mass, and even surface gravity of the moon.

I think it was more complicated than what you assume there. Newton published his Principia in 1687, but before 1798 we didn't know the gravitational constant:

https://en.wikipedia.org/wiki/Cavendish_experiment

However...

> Would you say it's fair to say that by then we knew in principle we could go there and walk there?

If you mean "we 'could' go if we had something we were sure we didn't have," then there is indeed a written "fiction" story published even before Newton published his Principia:

https://en.wikipedia.org/wiki/Comical_History_of_the_States_...

It's the discovery of the telescope that allowed people to understand that there are other "worlds" and that one would be able to "walk" there.

Newton's impact was to demonstrate that there isn't any "mover" (which many before identified as a deity) providing the motion of the planets, but that their motions simply follow from their properties and the "laws." Before, most expected Aristotle to be relevant:

https://en.wikipedia.org/wiki/Unmoved_mover

"In Metaphysics 12.8, Aristotle opts for both the uniqueness and the plurality of the unmoved celestial movers. Each celestial sphere possesses the unmoved mover of its own—presumably as the object of its striving, see Metaphysics 12.6—whereas the mover of the outermost celestial sphere, which carries with its diurnal rotation the fixed stars, being the first of the series of unmoved movers also guarantees the unity and uniqueness of the universe."


None of this really counters the core point - that we don't know how long it will be before we have AGI. Is there some way to define consciousness that will be discovered in the future that makes the problem possible?


Your core point (and that of the MIRI article you linked to) is not just "we don't know". It's that the chance of being imminent and catastrophic is worth taking seriously.

I am of course not saying you're wrong that "we don't know". We obviously don't know. It's possible, just like it's possible that we could discover cheap free energy (fusion?) tomorrow and then be in a post-scarcity utopia. But that's worth taking about as seriously as the possibility that we'll discover AGI tomorrow and be in a Terminator dystopia, or also a post-scarcity utopia.

More importantly, it's a distraction from the very real, well-past-imminent problems that existing dumb AI has, such as the surveillance economy and misinformation. OpenAI, to their credit, does a good job of taking these existing problems quite seriously. They draw a strong contrast to MIRI's AI alarmism.

Have you ever read idlewords? Best writing I know of on this subject: https://idlewords.com/talks/superintelligence.htm


> In 1939, Enrico Fermi expressed 90% confidence [0] that creating a self-sustaining nuclear reaction with Uranium was impossible. And, if you're working with U238, it basically is! But it turns out that it's possible to separate out U235 in sufficient quantities to use that instead.

You are moving the goalposts. You mentioned "fission weapons" in the first place, and now you take a quote about a "nuclear fission reactor", which is a whole different thing.


A self-sustaining nuclear reaction is a prerequisite for a fission weapon. (that's what allows the exponential doubling of fission events to occur)

A nuclear reactor was also required for the production of Pu-239, which is what 2 of the first 3 bombs were made from.


They did understand it theoretically. This is the key flaw in any analogy between AI risk and nuclear weapons.


Quite fair, but not 100% certain.

Almost nobody really knows how developed the state-of-the-art theory / applied technology is in the confidential advances that the usual suspects may have already achieved, i.e. DeepMind, OpenAI, Baidu, the NSA, etc.

AGI could already have been achieved - even if only theoretically - somewhere, and we could be like people at the moment Edison got a light bulb to work: still using oil and knowing nothing about electricity, light bulbs, or energy distribution networks / infrastructure.

That would be the actual current technology level - new, and mostly not yet implemented.

Back then you wouldn't have believed it if someone had told you, "hey, in ten years city nights won't be dark anymore."


This is an AI equivalent of believing that the NSA has proved that P=NP and can read everyone's traffic.

There's no way to disprove it, but given that in the open literature people haven't even found a way to coherently frame the question of general AI, let alone theorize about it, it becomes just another form of magical thinking.


You're partially right (because AGI really looks VERY far away given the current publicly known state of theory), but it's not exactly "magical thinking".

There are several public examples of theory/technology radically more advanced than what was publicly known to be possible at a certain time, kept secret by governments / corps for a very long time (decades).

Lockheed built the Blackbird decades before it was even admitted that technology like that could exist. Looking backwards it may seem like an "incremental" advance, but it wasn't; the engineering required to make the Blackbird fly was revolutionary for its time (back in the '50s / '60s).

The Lockheed F-117 and its tech had a similar path, only somewhat admitted in the late '80s (and that was '70s technology, probably based on theoretical concepts from the '60s).

More or less the same could be said about the tech at Bletchley Park: the tech / theory of the day propelled to extraordinary capabilities by radical, top-secret advances in engineering. The hardware, events, and advances at Bletchley Park were kept secret for years (I think only in the '50s did they start to be carefully mentioned, but not fully admitted, and nothing even close to the details currently found on Wikipedia).

At any given time there could be a lot of theory/technology jump-aheads being achieved out there, several decades ahead of the publicly published/known, supposedly current, theory/technology.


The point is, we don't need to know exactly how consciousness works to create an AGI. In theory, we can just simulate all the neurons in the brain on a supercomputer cluster and voila, we have AGI. Of course, it's not that simple, but you get my point.


This is a flawed analogy. The conceptual basis of nuclear weapons was well understood as soon as it was learned that the atom has a compact nucleus. The energy needed to bind that nucleus together gives a rough idea of the power of a fission weapon. If that energy could be liberated all at once, it would make an explosive orders of magnitude more powerful than anything known.

It was hard to predict when or if such a thing could be made, but everyone knew what was under discussion.

Compare this to AGI, some vaguely emergent property of a complex computer system that no one can define to anyone else's satisfaction. Attempts to be more precise what AGI is, how it would first manifest itself, and why on earth we should be afraid of it, rapidly devolve into nerd ghost stories.


  1932 neutron discovered
  1942 first atomic reactor
  1945 fission bomb
Now for AI

  1897 electron discovered
  1940's vacuum tube computers
  1970's integrated circuits
  1980's first AI wave fails, AI winter begins
  2012 AI spring begins
  2019 AI can consistently recognize a jpeg of a cat, but still not walk like a cat
  ???? Human level AGI
It doesn't seem comparable one way or the other, in many ways. But if we do compare them, AI is going much slower and with more failure, backtracking, and uncertainty.


    1943 First mathematical neural network model
    1958 Learning neural network classifies objects in spy plane photos
    1965 Deep learning with multi-layer perceptrons

    2010 ImageNet error rate 28%
    2011 ImageNet error rate 25%
    2012 ImageNet error rate 16%
    2013 ImageNet error rate 11%
    2017 ImageNet error rate 3%
    2019 Pre-AGI


Beer * beets * bears * Battlestar Galactica


what?


BEER * BEETS * BEARS * BATTLESTAR GALACTICA


> This is a flawed analogy. The conceptual basis of nuclear weapons was well understood as soon as it was learned that the atom has a compact nucleus. The energy needed to bind that nucleus together gives a rough idea of the power of a fission weapon. If that energy could be liberated all at once, it would make an explosive orders of magnitude more powerful than anything known.

Extrapolating as you seem to be here, when should I expect to see a total conversion reactor show up? I want 100% of the energy in that Uranium, dammit - not the piddly percentages you get from fission!

Seriously, I think you overestimate how predictable nuclear weapons were. Fission was discovered in 1938.


If you read your own Wikipedia link, you'd see that Rutherford's gold foil experiments were started in 1908, his nuclear model of the atom was proposed in 1911—we even split the atom in 1932! (1938 is when we discovered that splitting heavier atoms could release energy rather than consume it.)

We haven't even had the AGI equivalent of the Rutherford model of the atom yet: what's the definition of consciousness? What is even the definition of intelligence?


You might not need a definition of consciousness. Right now it looks like you can get quite far with "fill in the blanks" type losses (GPT-2 and BERT) in the case of language understanding, and self-play in the case of games.


We are indeed getting impressively far. Four decades after being invented, machine learning went from useless to useful to enormous societal ramifications terrifyingly quickly.

However, we are not getting impressively close to AGI. That's why we need to stop the AGI alarmism and get our act together on the enormous societal ramifications that machine learning is already having.


I think there is a lot of evidence that explosive progress could be made quickly: AlphaGo Zero, machine vision, sentiment analysis, machine translation, voice, etc.

All these things have surged incredibly in less than a decade.

It's always a long way off until it isn't.


Those are all impressive technical achievements to be sure, but they don't constitute evidence of progress toward AGI. If I'm driving my car from Seattle to Honolulu and I make it to San Diego it sure seems like I made a lot of progress?


> I think there is a lot of evidence that explosive progress could be made quickly. Alphago zero, machine cision, sentiment analysis, machine translation.. voice.. etc etc etc

Not at all; these are all one-trick ponies and bring you nowhere close to real AGI, which is akin to human intelligence.


The Manhattan Project is a very apt analogy. Even if you believe that AGI is impossible, it should be possible to appreciate that many billions would quickly be invested in its development if somehow a viable pathway to it became clear. Even if just to a few well-connected experts.

This is what happened when it became known nuclear weapons were a viable concept. The technology shifted power to such an extreme degree that it was impossible not to invest in it, and the delay from «likely impossible» to «done» happened too fast for most observers to notice.


The Manhattan project happened when the entire conceptual road map to fission weapons was understood. This is manifestly not the case with AI, which can be charitably described as "add computers until magic".


I didn’t compare OpenAI to the Manhattan Project. I was pointing out that if a small number of people discover a plausible conceptual pathway to AGI, a similar project will happen.


And I'm pointing out that the conceptual breakthroughs that preceded such an engineering sprint happened in the open literature. Wells was writing sci-fi about atomic weapons in 1914. He based it off of a pop-science book written in 1909.

We don't have any such understanding, or even a definition, of 'AGI'.


Wells' atomic bomb sci-fi was of the type «there is energy in the atom, and maybe someone will use this in bombs someday». Nowhere close to the physical reality of a weapon, more in the realm of philosophy, which is where strong AI currently is. We have an existence proof of intelligence already, after all. The idea is not based on pure fantasy, even though the practicalities are unknown.

Leo Szilard had more plausible philosophical musings in the early thirties, that did not have root in any workable practical idea. The published theoretical breakthroughs you mention didn’t happen until the late thirties. Nuclear fission, the precursor to the idea of an exponential chain reaction, happened only in 1938, 7 years before Trinity.


The issue with strong AI is not that "practicalities are unknown", any more than the issue with Leonardo da Vinci's daydreams of flying machines were that "practicalities are unknown".

He didn't have internal combustion engines, but that's a practicality, other mechanical power sources already existed (Alexander the Great had torsion siege engines). They would never be sufficient for flight, of course, but the principle was understood.

But he could never have even begun to build airfoils, because he didn't have even an inkling of proto-aerodynamics. He saw that birds exist, so he drew a machine with wings that flapped. Look at the wings he drew: https://www.leonardodavinci.net/flyingmachine.jsp

That's an imitation of birds with no understanding behind it. That's the state of strong AI today: we see that humans exist, so we create imitations of human brains, with no understanding behind them.

That led to machine learning, and after 40 years of research we figured out that if you feed it terabytes of training data, it can actually be "unreasonably effective", which is impressive! How many pictures of giraffes did you have to see before you could instantly recognize them, though? One, probably? Human cognition is clearly qualitatively different.

The danger of machine learning is not that it could lead to strong AI. It's that it is already leading to pervasive surveillance and misinformation. (idlewords is pretty critical of OpenAI, but I actually credit OpenAI with taking this quite seriously, unlike MIRI.)


Why do we assume that AGI requires billions of $? Fundamentally, we don't know how to do it, so it may just require the right software design.

Nuclear weapons required enriched uranium, and the gaseous diffusion process of the time was insanely power-hungry. Like non-negligible (>1%?) percentage of the US's entire electrical generation power-hungry.


Yes I think the better analogy is Fermat's Last Theorem. It didn't require billions of dollars, it just required one incredibly smart specialist grinding on the problem for years.


AGI = Alien invasion


I wouldn't be so sure the Manhattan Project-type global arms race isn't already happening.


The atomic bomb was based on science theory. A computer can run many programs and do a great many things, but it will never be able to think by itself.


> The atomic bomb was based on science theory.

Our study of (automated) intelligence is based on science too.

> A computer ... will never be able to think by itself.

Turing wrote an entire paper about this (Computing Machinery and Intelligence), where he rephrases your statement (because he finds it to be meaningless) and devises a test to answer it. He also directly attacks your phrasing of "but it will never":

> I believe they are mostly founded on the principle of scientific induction. A man has seen thousands of machines in his lifetime. From what he sees of them he draws a number of general conclusions. They are ugly, each is designed for a very limited purpose, when required for a minutely different purpose they are useless, the variety of behaviour of any one of them is very small, etc., etc. Naturally he concludes that these are necessary properties of machines in general.

> A better variant of the objection says that a machine can never "take us by surprise." This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.


> A better variant of the objection says that a machine can never "take us by surprise." This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.

This seems like a cop out. Sure, if you do your calculations wrong, it doesn’t behave as you expect. But it’s still doing exactly what you wrote it to do. The surprise is in realizing your expectations were wrong, not that the machine decided to behave differently.


I think any AI researcher has a tale where an algorithm they wrote genuinely took them by surprise. Not due to wrong calculations, but by introducing randomness, heaps of data, and game boundaries where the AI is free to fill in the blanks.

A good example of this is "move 37" from AlphaGo. This move surprised everyone, including the creators, who were not skilled enough in Go to hardcode it: https://www.youtube.com/watch?v=HT-UZkiOLv8


Investing in a bubble only to make sure the money goes back to yourself seems like an economic loophole. Do you think computers will start to have dreams and desires? Abusing such a machine would be unethical. Go ahead and build a better OCR, just don't fall for the AGI hype.


> Our study of (automated) intelligence is based on science too.

Can you elaborate which part of sciences you are talking about here?


All sciences that collaborate with the field of AI: Cognitive Science, Neuroscience, Systems Theory, Decision Theory, Information Theory, Mathematics, Physics, Biology, ...

Any AI curriculum worth its salt includes the many scientific and philosophical views on intelligence. It is not all alchemy, though the field is in a renewal phase (with horribly hyped nomenclature such as "pre-AGI", and the most impressive implementations coming from industry and government, not academia).

And even though the atom bomb was based on science too, there is this anecdote from Hamming:

> Shortly before the first field test (you realize that no small scale experiment can be done—either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself! The next day when he came for the answers I remarked to him, "The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen—after all, there could be no experiments at the needed energy levels." He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, "What have you done, Hamming, you are involved in risking all of life that is known in the Universe, and you do not know much of an essential part?" I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, "Never mind, Hamming, no one will ever blame you."


It does not need to. It just needs to get complex enough. This is from a 1965 article:

"If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and as machines become more and more intelligent, people will let machines make more and more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machine off, because they will be so dependent on them that turning them off would amount to suicide."


I agree with the above, but imagine the same argument where "the machines" is replaced with "subject-matter experts", or "politicians acting on the advice of subject-matter experts".

The accumulated knowledge and skills of not just specialised individuals but entire institutions, working on highly technical and abstract areas of society, seems like it has created a kind of empathy gap between the people ostensibly wielding power and those who are experiencing the effects of that power (or the limits of that power).

> "... turning them off would amount to suicide."

Although this conclusion appears equally valid in the replacement argument, it sadly doesn't come with the wanted guarantee of "therefore that wouldn't happen".


> A computer can run many programs and do a great many things, but it will never be able to think by itself.

A computer being able to simulate a brain that thinks for itself is the logical extrapolation of current brain-simulation efforts. Many people think there are far less computationally intensive ways to make an AI, but "physics sim of a human brain" is a good thought experiment.

Unless you think there's something magic about human brains? Using "magic" here to mean incomprehensible, unobservable, and incomputable.


> A computer being able to simulate a brain that thinks for itself is the logical extrapolation of current brain-simulation efforts

Except that our current neural networks have nothing to do with the actual neurons in our brain and how they work.


I believe ekianjo wasn't talking about neural networks, but simulations using models that are similar to how neurons work. Computational neuroscience is a thing.


That's quite a claim, considering that we don't know what the word "think" means.


"maybe" eventually openAI


Maybe eventually openAI


maybe eventually open maybe ai


does anyone serious (non-encumbered) actually believe that?


> We'll still release quite a lot, and those releases won't look any different from the past.

I don’t mean to parse your words, but will you continue to publish using the same exact criteria as before or will there be a new editorial filter?


I want to take everything OpenAI says at face value (they seem like good folk), but I can't help but wonder at the recent choice to keep GPT-2 closed, on what seemed like pretty thin safety arguments to me.

Now, the demonstrated ability to produce new models which are closed, but maybe can be used as services on a preferred partner's cloud, looks very commercially relevant? How will these conflicts be managed, or is it more like "we are just a commercial entity now, of course we'll do this"?


Their handling of OpenAI Five rubbed me the wrong way as well. The whole operation smelled very PR-ish to me personally -- unnecessarily/unjustifiably hype-y representatives, complaining that the Dota community wanted to see OpenAI Five play a normal Dota match against pros rather than a heavily constrained environment that benefited the bots, among other things.


Same. They left the OpenAI Five project half-baked, and that's very disappointing.


An earlier comment mentions that they are for-profit now (changed from non-profit a while ago).



$1B is a lot of money. Microsoft is not a charity foundation, so the suspicion is obvious.

> We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI. We’ll jointly develop new Azure AI supercomputing technologies, and Microsoft will become our exclusive cloud provider—so we’ll be working hard together to further extend Microsoft Azure’s capabilities in large-scale AI systems.

Maybe it's because I'm not an expert, but what does it really mean? Do people understand what "Microsoft will become our exclusive cloud provider" means?

OpenAI is great, but suspicion is understandable from the users' side when so much commercial money is involved.


My "guess" is they're offering $1B worth of Azure services. Which costs MSFT probably much less than $1B.

My "guess" is that it means MSFT has access to sell products based off the research OpenAI does to MSFT's customers. Having early access to advanced research means MSFT could easily make this money back by selling better AI tools to their customers.

Also a great time to point out that while "Microsoft is not a charity foundation" it does offer a ton of free Azure to charities. https://www.microsoft.com/en-us/nonprofits/azure This has been an awesome thing to use when helping small non-profits with little money to spend on "administrative costs".


> My "guess" is they're offering $1B worth of Azure services. Which costs MSFT probably much less than $1B.

It's a cash investment. We certainly do plan to be a big Azure customer though.

> My "guess" is that it means MSFT has access to sell products based off the research OpenAI does to MSFT's customers. Having early access to advanced research means MSFT could easily make this money back by selling better AI tools to their customers.

I'm flattered that you think our research is that valuable! (As I say in the blog post: we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.)


OpenAI has achieved some amazing results and I congratulate them for their accomplishments, but labeling any of that as "pre-AGI" is intellectually dishonest and misleading at best. They haven't shown any meaningful progress toward true AGI.

When I was 10 I created some "pre-time travel" technology by designing an innovative control panel for my time machine. Sadly I ran into some technical obstacles later in the project. OpenAI is at about the same phase with AGI.


Sorry for the cowardice of this throwaway account, but it freaks me out that Musk left, and Thiel is still there.

Going back in time:

> Musk has joined with other Silicon Valley notables to form OpenAI, which was launched with a blog post Friday afternoon. The group claimed to have the goal “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

What happened here?

I know it’s far off, but I am concerned about AGI misanthropy and the for-profit turn of OpenAI. Who is the humanist anchor, of Elon’s gravitas, left at OpenAI?

What happened to the original mission? Are any of you concerned about this? Can you get rid of Peter Thiel please? Can we buy him out as a species? I respect the man’s intellect yet truly fear his misanthropy and influence.

Apologies for the rambling, but you all got me freaked out a bit. I had, and still do have such high hopes for OpenAI.

Please don’t lose the forest for the trees.


They parted on good terms; Musk was competing for talent (e.g. Andrej Karpathy leaving OpenAI for Tesla).

See: https://twitter.com/elonmusk/status/1096987465326374912?lang...


>but it freaks me out that Musk left,

Why? He left due to a possible conflict of interest: Tesla is researching AI for self-driving vehicles, and it wouldn't surprise me if SpaceX does at some point too (assuming they aren't already).


If your team ever gets frustrated by ARM, have them shoot me an email for my old proposal on how to fix it (source: used to work at Azure and perennially frustrated by ARM design)


I'm interested to hear more about this proposal! What problems does it aim to fix? Would you be willing to share?


Can I say "most of them"? Basically it simplifies ARM and the Azure API significantly and makes Azure operate more like "infrastructure as code". But I'd need to look at the latest version of Azure to see specifically what the remaining pain points are now. I could have a more specific conversation in a different setting.

I remember the first meetings about ARM when the resource IDs were presented, and a few people immediately asked "what if someone wants to rename a resource"? Years later you still could not do that (I'm hoping they've fixed that by now?).

It seemed to me that ARM was the result of some design by super smart committee, and got a lot wrong. When I was there more senior folks told me not to worry, that's just the Microsoft way (wait for version 3). I do have to admit that it's turning out they knew more than me (shocking!), as over time I've seen some of the stuff that was inexplicably terrible in v1 become much, much better in later versions.


If you manually rename a resource and refer to it by resource ID, I don't think ARM understands anything about it and assumes it's a new resource. That's just from using ARM, though; I don't know its internals.

They are investing a good amount in ARM lately, though. The VS Code language server is pretty good, and the export template feature got much better.


> They are investing a good amount in ARM lately, though. The VS Code language server is pretty good, and the export template feature got much better.

Awesome! I sheepishly have been using GCP, AWS, and DO. I last gave Azure a shot last year, but perhaps it's time to take another look.


Thank you for taking the time to clarify & correct my statements!


Just saw this post. Wow, that’s a big cash investment and certainly makes this very significant.


> by selling better AI tools to their customers

Microsoft really needs this. ML.NET is quite anemic compared to the industry-standard AI toolkits: TensorFlow, Theano, scikit-learn, Torch, Keras, etc.

https://dotnet.microsoft.com/apps/machinelearning-ai/ml-dotn...


Disclosure: I work at Azure in AI/ML

Another way to think about it is that for folks building in .NET, ML.NET makes it easy for them to start using many of the ML techniques without having to learn something new.

On top of that, we FULLY support all the industry standard tools - TF, Keras, PyTorch, Kubeflow, SciKit, etc etc. We even ship a VM that lets you get started with any of them in one click (https://azure.microsoft.com/en-us/services/virtual-machines/...) and our hosted offering supports them as well! (e.g. TF - https://docs.microsoft.com/en-us/azure/machine-learning/serv...)


Nice, thanks for the info. Maybe it's proper to think of ML.NET as training wheels, then!


For what it's worth, we are pretty proud of the performance as well - I wouldn't call it training wheels :)

On both scale up and run times, it measures up as among the best-in-class[1]. That is to say, for the scenarios which people use it most commonly (GBT, linear learners), it's a great fit!

[1] https://arxiv.org/pdf/1905.05715.pdf


Microsoft has Windows.AI tho.


"suspicious is understandable from the users side when so much commercial money is involved."

OpenAI is a commercial entity. They restructured from a non-profit.

This is a completely commercial deal to help Azure catch up with Google and Amazon in AI. OpenAI will adopt Azure and make it their preferred platform. And Microsoft and OpenAI will jointly "develop new Azure AI supercomputing technologies", which I assume means advancing their FPGA-based deep learning offering.

Google has a lead with TensorFlow + TPUs and this is a move to "buy their way in", which is a very Microsoft thing to do.


I was always under the impression that Azure had a lead in ML-as-a-service.

I really liked LUIS (Language Understanding Intelligent Service) back in 2017 and AFAIK only Alibaba had an offering similar to Azure at the time for ML-as-a-service.


For Microsoft, the investment will likely come in the form of computing resources to support AI practices/tests, MS personnel, and paying current OpenAI personnel (expensive due to their expertise). The findings and expertise will likely be used in the future to help drive improvements in Microsoft's stack (cloud computing, search engine, etc.). OpenAI will be licensing some of its technologies to Microsoft.

For OpenAI, it means the availability of resources for their main mission for the foreseeable future, while potentially giving founders and other investors the opportunity to either double down on OpenAI or reallocate resources to other initiatives (think of Musk, for example).

"Do people understand what "Microsoft will become our exclusive cloud provider" means?" It likely means that computing power will be provided by Microsoft and that it may have access to the algorithms and results.


Maybe they will use that 1 Billion on Azure fees lol.


That's like a month of CosmosDB storing a DVD worth of records!


I actually made up a pun today: CostMostDB. Hehe... But seriously, it's not that expensive. A client is using it quite heavily and they don't reach any particularly high level of spend at all for a corporation, but YMMV.


Cosmos is priced on throughput, not size. So it would be a month of having the capacity to I/O a DVD's worth of records.


It's both. And storage cost is amplified by the number of regions you replicate to.


They don’t describe the terms of the deal.

Is it $1 billion in cash/stock?

Or $1 billion in Azure credits and engineering hours?


Another comment says it's a cash deal.


it must be described somewhere, though not in the announcement. I don't think you're allowed to make $1B deals with a public company without specifying those things somewhere.


Well, sure it’s detailed somewhere. We in this thread don’t know, is what I was getting at.

The comment I replied to may not be far off the mark in what this really is: computer/human time “worth $1 billion” or something.

If it’s actual cash that says something different to me than a donation of resources with some value estimated by MS.


This screams alarm bells of an acquisition to me.


Does that mean OpenAI will (co-)develop the Azure AI platform, and then pay Microsoft for using it?


How do we ensure OpenAI is still Open if they're exclusive with MSFT?


According to some comments from OpenAI employees elsewhere in the thread, it's already not really open.


> We think its impact should be to give everyone economic freedom to pursue what they find most fulfilling, creating new opportunities for all of our lives that are unimaginable today.

The cynic in me thinks this will never happen, that instead it will make a small subset of the population super rich, while the rest are put to work somewhere to make them even more money. Microsoft will ultimately want a return on their billion, at least.


Well, the super rich getting richer is the status quo, so I kind of feel like nothing much changes if this never happens. Now, riding that happy PR wave and failing to deliver would be lame, but perhaps they really believe this. I think it will depend entirely on how much really gets open sourced in the end. I want to believe they’ll really do it.


Open sourcing still may not level the playing field if it turns out it requires corporate (or state) level resources to operate


Genuine question: In what sense is OpenAI open ?


I guess it started as one, but a pivot happened..

Mr. Musk left the lab last year to concentrate on his own A.I. ambitions at Tesla. Since then, Mr. Altman has remade OpenAI, founded as a nonprofit, into a for-profit company so it could more aggressively pursue financing.

https://www.nytimes.com/2019/07/22/technology/open-ai-micros...


Good question. Looks like Microsoft bought a partner to help them make Azure more competitive with Google & Amazon, both on hardware scalability and quality of their AI offerings:

> we’ll be working hard together to further extend Microsoft Azure’s capabilities

> Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.

In the end, it's a win-win. If OpenAI remains partially open, it's still better for the rest of the world, better than nothing. But, as achow said, it did pivot.


OpenAI should change their name. Seriously, it’s confusing.


It's just like the bag of sugar I have in my cabinet that claims to be low calorie. It says "ONLY 16 CALORIES" (per 4 grams, in tiny letters).

I mean, sugar is pretty much the definition of a high-calorie food. It's, like, pure calories. And it can affect insulin regulation, etc. That's why they need to put some marketing on it.


Off topic, I know, but this reminded me of TV adverts we used to have in my country when I was young saying how great sugar is because it has 0% fat... If only the butter companies had responded with adverts about how they're 0% sugar, that would have been fun. :/


Low carb / no carb is gaining in popularity (and some products label it). Sugar is finally becoming the bad guy despite enormous lobbying efforts over a very long time (since the 60s?).


Super nitpicky: sugar is 4 kcal/gram, but so is protein, and fat is 9 kcal/g, over double the caloric content.

By saying sugar is "pure calories," I guess you mean that it doesn't contain any fibre or micronutrients that might give it redeeming qualities (besides its taste), which is true.


Most people are not putting spoonfuls of beef in their coffee.

Sugar is, quite literally, pure carbohydrates. Most sources of protein are not, unless you're consuming refined amino acids (pro tip: they taste awful).


> Most people are not putting spoonfuls of beef in their coffee.

That could be... interesting. It could be in the form of beef bouillon, so you really could put in a spoonful.

But I wonder if bacon would be better...


Brits have that, it's called Bovril.


Lots of people do add cream, though, and some people add coconut oil (MCT oil) or actual cow's-milk butter.


Hmmm... it reads to me as though someone has co-opted an open standard. How much of this is really an investment and how much is an in-kind contribution of Azure resources?

Also this sounds dangerous: "exclusive cloud provider".

When an OpenAI group starts to make exclusive partnerships with one vendor, I wonder how "Open" it is.

I cannot imagine Khronos Group, which runs the similarly named OpenGL, etc., having an "exclusive" graphics card supported for their open standards. Cloud computing is to OpenAI as graphics cards are to OpenGL/Vulkan.


How open is the computer world anyway? Not very. At the hardware level not at all. So yes take it with a grain of salt. It’s still the product of billionaires and tech giants.


When was OpenAI an open standard? Comparing OpenAI to OpenGL makes zero sense.


I'm quite suspicious of private companies helping open source. It seems to me that by relying on private companies, open source ends up tailored to standards that work best on the platforms doing the financing, cementing monopolies and oligopolies. In my opinion, open source should have the same status as science and be financed by government.


I dunno, Google has made hugely important contributions to open source and the practice of software engineering in general. Would we have all that if it was purely government backed? As with Bell Labs, having an effective monopoly on a new tech product has spun off a ton of innovation for technology in general.


I think there is a difference between public funding for research (it's how the majority of research is funded) and public funding for open-source software (which isn't happening yet, to my knowledge, so it's an interesting and potentially powerful unexplored idea).


Can we talk about the usage of the term "AGI" here? Considering its connotations in popular culture it sounds terrifically inappropriate in terms of what we can feasibly build today.

Can we assume that marketing overrode engineering on the terminology of this press release?


OpenAI has always had AGI as its real mission, as far as I know. And it has always been serious about it.

It is a research mission. No one "feasibly" knows exactly how to build AGI yet. But still we have many groups publicly pursuing it today.

If Microsoft is giving them a billion dollars in this context, I assume that OpenAI engineers and scientists will build out services for Azure ML that will then be sold to developers or consumers.

This type of thing is actually pretty normal for just about every company that is seriously pursuing AGI, since they eventually need some kind of income and narrow AI is the way for those types of teams to do that.


The purpose of OpenAI is to eventually lead to safe AGI. It's part of their core business purpose. Whatever they do with Machine Learning today is merely instrumental in leading up to that goal.

We certainly cannot feasibly build AGI today, hence OpenAI's use of the term "pre-AGI technologies".


I can't see how pure AGI can be "safe". A huge part of human intellect revolves around the need to survive, be it danger, lack of food, or less rational choices based on emotions. If computers can rationalise the positive behaviour of humans, they may not be able to do so well with greed, jealousy, and hunger for power, which aren't very logical processes but create a lot of positive and negative nonetheless.


OpenAI => CloseAI


So a highly sophisticated sales bot is the end goal :)


I feel pretty bad for people working in ML/AI at Microsoft Research right now. Microsoft is sending a clear signal that they would rather pay $1B for outside AI research than spend the same amount internally.


What's this "Pre-AGI" arrogance? Why are they so certain that it "will scale to AGI"? Is it an attempt at branding, or have they forgotten that AI is a global effort?

And do people really want to be "actualized" by "Microsoft and OpenAI’s shared value of empowering everyone"?


So is this the open-ai exit?


(I work at OpenAI.)

Quite the opposite — this is an investment!


Is all this talk of AGI some kind of marketing meme that you guys are tolerating? We haven't figured out sentiment analysis or convnets resilient to single pixel attacks, and here is a page talking about the god damned singularity.

As an industry, we've already burned through a bunch of buzzwords that are now meaningless marketing-speak. 'ML', 'AI', 'NLP', 'cognitive computing'. Are we going for broke and adding AGI to the list so that nothing means anything any more?


At what point would you deem it a good idea to start working on AGI safety?

What "threshold" would you want to cross before you think its socially acceptable to put resources behind ensuring that humanity doesn't wipe itself out?

The tricky thing with all of this is we have no idea what an appropriate timeline looks like. We might be 10 years away from the singularity, 1000 years, or it might never ever happen!

There is a non-zero chance that we are a few breakthroughs away from creating a technology that far surpasses the nuclear bomb in terms of destructive potential. These breakthroughs may have a short window of time between each of them (once we know a, knowing b,c,d will be much easier)

So given all of that, wouldn't it make sense to start working on these problems now? And the unfortunate part of working on these problems now is that you do need hype/buzzwords to attract talent, raise money, and get people talking about AGI safety. Sure, it might not lead anywhere, but just as fire insurance might seem unnecessary if you never have a fire, AGI research may end up being a useless field altogether, but at least it gives us that cushion of safety.


> At what point would you deem it a good idea to start working on AGI safety?

I don't know, but I'd say after a definition of "AGI" has been accepted that can be falsified against, and actually turn it into a scientific endeavour.

> The tricky thing with all of this is we have no idea what an appropriate timeline looks like.

We do. As things stand it's undetermined, since we don't even know what it's supposed to mean.

> So given all of that, wouldn't it make sense to start working on these problems now?

What problems? We can't even define the problems here with sufficient rigor. What's there to discuss?


> I don't know, but I'd say after a definition of "AGI" has been accepted that can be falsified against, and actually turn it into a scientific endeavour.

Uhh, that's the Turing Test.


>What problems?

- Privacy (How do you get an artificial intelligence to recognize, and respect, privacy? What sources is it allowed to use, how must it handle data about individuals? About groups? When should it be allowed to violate/exploit privacy to achieve an objective?)

- Isolation (How much data do you allow it access to? How do you isolate it? What safety measures do you employ to make sure it is never given a connection to the internet where it could, in theory, spread itself not unlike a virus and gain incredibly more processing power as well as make itself effectively undestroyable? How do you prevent it from spreading in the wild and hijacking processing power for itself, leaving computers/phones/appliances/servers effectively useless to the human owners?)

- A kill switch (under what conditions is it acceptable to pull the plug? Do you bring in a cybernetic psychologist to treat it? Do you unplug it? Do you incinerate every last scrap of hardware it was on?)

- Sanity check/staying on mission (How do you diagnose it if it goes wonky? What do you do if it shows signs of 'turning' or going off task?)

- Human agents (Who gets to interact with it? How do you monitor them? How do you make sure they aren't being offered bribes for giving it an internet connection or spreading it in the wild? How do you prevent a biotic operator from using it for personal gain while also using it for the company/societal task at hand? What is the maximum amount of time a human operator is allowed to work with the AI? What do you do if the AI shows preference for an individual and refuses to provide results without that individual in attendance? If a human operator is fired, quits or dies and it negatively impacts the AI what do you do?)

This is why I've said elsewhere in this thread, and told Sam Altman, that they need to bring in a team of people that specifically start thinking about these things and that only 10-20% of the people should be computer science/machine learning types.

OpenAI needs a team thinking about these things NOW, not after they've created an AGI or something reaching a decent approximation of one. They need someone figuring out a lot of this stuff for tools they are developing now. Had they told me "we're going to train software on millions of web pages, so that it can generate articles" I would have immediately screamed "PUMP THE BRAKES! Blackhat SEO, Russian web brigades, Internet Water Army, etc etc would immediately use this for negative purposes. Similarly people would use this to churn out massive amounts of semi-coherent content to flood Amazon's Kindle Unlimited, which pays per number of page reads from a pool fund, to rapidly make easy money." I would also have cautioned that it should only be trained on opt-in, vetted, content suggesting that using public domain literature, from a source like Project Gutenberg, would likely have been far safer than the open web.


Discussing the risks of AGI is always worthwhile and has been undertaken for several decades now. That's a bit different from the marketing fluff on the linked page:

"We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI"

Azure needs a few more years just to un-shit the bed with what their marketing team has done and catch up to even basic AWS/GCP analytics offerings. Them talking about AGI is like a toddler talking about building a nuclear weapon. This is the same marketing team that destroyed any meaning behind terms like 'real time', and 'AI'.


The proper threshold would be the demonstration of an AGI approximately as smart as a mouse. Until then it's just idle speculation. We don't even know the key parameters for having a productive discussion.


This makes no sense. Mice can't write poetry. Expecting a 1:1 equivalence between human and manufactured intelligence is no more coherent than denying the possibility of human-bearing flight until we have planes as acrobatic as a hawk.


I certainly wouldn't have that threshold decided by a for profit company disguised as an open source initiative to protect the world! Brought to us by Silicon Valley darlings, no thank you to that. They need to change their name or their mission. One has to go.


> There is a non-zero chance that we are a few breakthroughs away from creating a technology that far surpasses the nuclear bomb in terms of destructive potential.

No, there is exactly zero chance that anyone is "a few breakthroughs away" from AGI.


I'm compiling a list of reasons people doubt AGI risk. Could you clarify why you think AGI is certainly far-term?


I feel like I could write an essay about this.

AGI represents the creation of a mind .... It's something that has three chief characteristics: it understands the world around it, it understands what effects its actions will have on the world around it, and it takes actions.

None of those three things are even close to achievable in the present day.

No software understands the physical world. The knowledge gap here is IMMENSE. Software does not see what we see: it can be trained to recognize objects, but its understanding is shallow. Rotate those objects and it becomes confused. It doesn't understand what texture or color really are, what shapes really are, what darkness and light really are. Software can see the numerical values of pixels and observe patterns in them but it doesn't actually have any knowledge of what those patterns mean. And that's just a few points on the subject of vision, let alone all the other senses, all the world's complex perceivable properties. Software doesn't even know that there IS a world, because software doesn't KNOW anything! You can set some data into a data structure and run an algorithm on it, but there's no real similarity there to even a baby's ability to know that things fall when you drop them, that they fall in straight lines, that you can't pass through solid objects, that things don't move on their own, etc etc.

Even if, a century from now, some software did miraculously approach such an understanding, it still would not know how it was able to alter the world. It might know that it was able to move objects, or apply force to them, but could it see the downstream effects? Could it predict that adding tomatoes to a chocolate cake made no sense and rendered the cake inedible? Could it know that a television dropped out the window of an eight-story building was dangerous to people on the sidewalk below? Could it know that folding a paper bag in half is not destructive, but folding a painting in half IS? Understanding what can result from different actions, and why some are effective and others are not, is another vast chasm of a knowledge gap.

Lastly, and by FAR most importantly, the most essential thing.....software does not want. Every single thing we do as living creatures is because our consciousness drives us to want things: I want to type these words at this moment because I enjoy talking about this subject. I will leave soon because I want food and hunger is painful. Etc. If something does not feel pleasure or pain or any true sensation, it cannot want. And we have absolutely no idea how such a thing works, let alone how to create it, because we have next to no idea how our own minds work. Any software that felt nothing, would want nothing-- and so it would sit, inert, motionless...never bored, never curious, never tired, just like an instance of Excel or Chrome. Just a thing, not alive. No such entity could genuinely be described as AGI. We are likely centuries from being able to recreate our consciousness, our feelings and desires....how could someone ever be so naive as to believe it was right around the corner?


Thanks.


OpenAI is a for-profit corporation now. It's in their interest to use as many buzzwords as possible to attract that sweet venture capital, regardless of whether said buzzwords have any base in reality.


Sentiment analysis is at 96% and increasing rapidly. http://nlpprogress.com/english/sentiment_analysis.html


What exactly does that 96% mean, though? It means that on some fixed dataset you're achieving 96% accuracy. I'm baffled by this stupidity of claiming results (even high-profile researchers do this) based on datasets with models that are nowhere near as robust as the actual intelligence that we take as reference: humans. Take the model that makes you think "sentiment analysis is at 96%", come up with your own examples to apply a narrow Turing test to the model, and see if you still think sentiment analysis (or any NLP task) is anywhere near being solved. Also see: [1].

I think continual Turing testing is the only way of concluding whether an agent exhibits intelligence or not. Consider the philosophical problem of the existence of other minds. We believe other humans are intelligent because they consistently show intelligent behavior. Things that people claim to be examples of AI right now lack this consistency (possibly excluding a few very specific examples such as AlphaZero). It is quite annoying to see all these senior researchers along with graduate students spend so much time pushing numbers on those datasets without paying enough attention to the fact that pushing numbers is all they are doing.

[1]: As a concrete example, consider the textual entailment (TE) task. In the deep learning era of TE there are two commonly used datasets on which the current state of the art has been claimed to be near or exceeding human performance. What these models perform exceptionally well on is not the general task of TE; it is TE as evaluated on these fixed datasets. A recent paper by McCoy, Pavlick, and Linzen (https://arxiv.org/abs/1902.01007) shows these systems are so brittle that at this point the only sensible response to those insisting we are nearing human performance in AI is to laugh.
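
To make this concrete, here's a toy sketch (scikit-learn, with a made-up eight-example "dataset"; nothing to do with SST or any real benchmark) of how a classifier can look solved on data drawn from its own distribution and still fall over on a trivial rewording a human would never miss:

    # Toy illustration only: tiny hand-made "dataset", not a real benchmark.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = ["great movie", "loved it", "fantastic acting", "what a great film",
                   "terrible movie", "hated it", "awful acting", "what a terrible film"]
    train_labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = positive, 0 = negative

    clf = make_pipeline(CountVectorizer(), LogisticRegression())
    clf.fit(train_texts, train_labels)

    # Looks perfect on held-out examples from the same distribution...
    print(clf.score(["great acting", "awful film"], [1, 0]))  # 1.0 on this tiny pair

    # ...but a simple negation fools it: "not" was never seen in training, so the
    # model effectively only sees "great movie" and predicts positive.
    print(clf.predict(["not a great movie at all"]))

The point isn't that this particular model is bad; it's that a number computed on a fixed test split tells you very little about behavior off that split.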


> I think continual Turing testing is the only way of concluding whether an agent exhibits intelligence or not.

So you think it's impossible to ever determine that a chimpanzee, or even a feral child, exhibits intelligence? This seems rather defeatist.


No, interpreting "continual" the way you did would mean I should believe that we can't conclude our friends to be intelligent either (I don't believe that). Maybe I should've said "prolonged" rather than "continual".

Let me elaborate on my previous point with an example. If you look at the recent works in machine translation, you can see that the commonly used evaluation metric of BLEU is being improved upon at least every few months. What I argue is that it's stupid to look at this trend and conclude that soon we will reach human performance in machine translation. Even comparing against the translation quality of humans (judged again by BLEU on a fixed evaluation set) and showing that we can achieve higher BLEU than humans is not enough evidence. Because you also have Google Translate (let's say it represents the state of the art), and you can easily get it to make mistakes that humans would never make. I consider our prolonged interaction with Google Translate to be a narrow Turing test that we continually apply to it. A major issue in research is that, at least in supervised learning, we're evaluating on datasets that are not different enough from the training sets.
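
As a toy illustration of the BLEU point (using NLTK's sentence_bleu; the sentences are invented for the example), the metric rewards shared n-grams, not preserved meaning:

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = [["the", "cat", "is", "not", "on", "the", "mat"]]
    paraphrase = ["the", "cat", "is", "off", "the", "mat"]        # right meaning, different words
    negation_dropped = ["the", "cat", "is", "on", "the", "mat"]   # opposite meaning, shared n-grams

    smooth = SmoothingFunction().method1
    print(sentence_bleu(reference, paraphrase, smoothing_function=smooth))
    print(sentence_bleu(reference, negation_dropped, smoothing_function=smooth))  # scores higher

The candidate that inverts the meaning scores higher than the faithful paraphrase, which is exactly the kind of gap a fixed metric on a fixed set won't surface.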

Another subtle point is that we have strong priors about the intelligence of biological beings. I don't feel the need to Turing test every single human I meet to determine whether they are intelligent, it's a safe bet at this point to just assume that they are. The output of a machine learning algorithm, on the other hand, is wildly unstable with respect to its input, and we have no solid evidence to assume that it exhibits consistent intelligent behavior and often it is easy to show that it doesn't.

I don't believe that research in AI is worthless, but I think it's not wise to keep digging in the same direction that we've been moving in for the past few years. With deep learning, while accuracies and metrics are pushed further than before, I don't think we're significantly closer to general, human-like AI. In fact, I personally consider only AlphaZero to be an unambiguous win for this era of AI research, and it's not even clear whether it should be called AI or not.


My comment was not on ‘continual’ but on ‘Turing test’.

If you gave 100 chimps of the highest calibre 100 attempts each, not a single one would pass a single Turing test. Ask a feral child to translate even the most basic children's book, and their mistakes will be so systematic that Google Translate will look like professional discourse. ‘Humanlike mistakes’ and stability with respect to input in the sense you mean here are harder problems than intelligence, because a chimp is intelligent and functionally incapable of juggling more than the most primitive syntaxes in a restricted set of forms.

I agree it is foolish to just draw a trend line through a single weak measure and extrapolate to infinity, but the idea that no collation of weak measures has any bearing on fact rules out ever measuring weak or untrained intelligence. That is what I called defeatist.


I see your point, but you're simply contesting the definition of intelligence that I assumed we were operating with, which is humanlike intelligence. Regardless of its extent, I think we would agree that intelligent behavior is consistent. My main point is that the current way we evaluate the artificial agents is not emphasizing their inconsistency.

Wikipedia defines Turing test as "a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human". If we want to consider chimps intelligent, then in that context the definition of the Turing test should be adjusted accordingly. My point still stands: if we want to determine whether a chimp exhibits intelligence comparable to a human, we do the original Turing test. If we want to determine whether a chimp exhibits chimplike intelligence, we test not for, say, natural language but for whatever we want our definition of intelligence to include. If we want to determine whether an artificial agent has chimplike intelligence, we do the second Turing test. Unless the agent can display as consistent an intelligence as chimps, we shouldn't conclude that it's intelligent.

Regarding your point on weak measures: If I can find an endless stream of cases of failure with respect to a measure that we care about improving, then whatever collation of weak measures we had should be null. Wouldn't you agree? I'm not against using weak measures to detect intelligence, but only as long as it's not trivial to generate failures. If a chimp displays an ability for abstract reasoning when I'm observing it in a cage but suddenly loses this ability once set free in a forest, it's not intelligent.


I'm not interested in categorizing for the sake of categorizing, I'm interested in how AI researchers and those otherwise involved can get a measure of where they're at and where they can expect to be.

If AI researchers were growing neurons in vats and those neurons were displaying abilities on par with chimpanzees I'd want those researchers to be able to say ‘hold up, we might be getting close to par-human intelligence, let's make sure we do this right.’ And I want them to be able to do that even though their brains in vats can't pass a Turing test or write bad poetry or play basic Atari games and the naysayers around them continue to mock them for worrying when their brains in vats can't even pass a Turing test or write bad poetry or play basic Atari games.

Like, I don't particularly care that AI can't solve or even approach solving the Turing test now, because I already know it isn't human-par intelligent, and more data pointing that out tells me nothing about where we are and what's out of reach. All we really know is that we've been doing the real empirical work with fast computers for 20ish years now and gone from no results to many incredible results, and in the next 30 years our models are going to get vastly more sophisticated and probably four orders of magnitude larger.

Where does this end up? I don't know, but dismissing our measures of progress and improved generality with 'nowhere near as robust as [...] humans' is certainly not the way to figure it out.

> If I can find an endless stream of cases of failure with respect to a measure that we care about improving, then whatever collation of weak measures we had should be null. Wouldn't you agree?

No? Isn't this obviously false? People can't multiply thousand-digit numbers in their heads; why should that in any way invalidate their other measures of intelligence?


>no results to many incredible results

What exactly is incredible (relatively) about the current state of things? I don't know how up-to-date you are on research, but how can you be claiming that we had no results previously? This is the kind of ignorance of previous work that we should be avoiding. We had the same kind of results previously, only with lower numbers. I keep trying to explain that increasing the numbers is not going to get us there because the numbers are measuring the wrong thing. There are other things that we should also focus on improving.

>dismissing our measures of progress and improved generality with ‘nowhere near as robust as [...] humans’ is certainly not the way figure it out.

It is the way to save this field from wasting so much money and time on coming up with the next small tweak to get that 0.001 improvement in whatever number you're trying to increase. It is not a naive or spiteful dismissal of the measures, it is a critique of the measures since they should not be the primary goal. The majority of this community is mindlessly tweaking architectures in pursuit of publications. Standards of publication should be higher to discourage this kind of behavior. With this much money and manpower, it should be exploding in orthogonal directions instead. But that requires taste and vision, which are unfortunately rare.

>People can't multiply thousand-digit numbers in their heads; why should that in any way invalidate their other measures of intelligence?

Is rote multiplication a task that we're interested in achieving with AI? You say that you aren't interested in categorizing for the sake of categorizing, but this is a counterexample for the sake of giving a counterexample. Avoiding this kind of example is precisely why I said "a measure that we care about improving".


> What exactly is incredible (relatively) about the current state of things?

Compared to 1999?

Watch https://www.youtube.com/watch?v=kSLJriaOumA

Hear https://audio-samples.github.io/#section-4

Read https://grover.allenai.org/

These are not just ‘increasing numbers’. These are fucking witchcraft, and if we didn't live in a world with 5 inch blocks of magical silicon that talk to us and giant tubes of aluminium that fly in the sky the average person would still have the sense to recognize it.

> It is the way to save this field from [...]

For us to have a productive conversation here you need to either respond to my criticisms of this line of argument or accept that it's wrong. Being disingenuous because you like what the argument would encourage if it were true doesn't help when your argument isn't true.

> Is rote multiplication a task that we're interested in achieving with AI?

It's a measure for which improvement would have meaningful positive impact on our ability to reason, so it's a measure we should wish to improve all else equal. Yes, it's marginal, yes, it's silly, that's the point: failure in one corner does not equate to failure in them all.


>These are not just ‘increasing numbers’. These are fucking witchcraft, and if we didn't live in a world with 5 inch blocks of magical silicon that talk to us and giant tubes of aluminium that fly in the sky the average person would still have the sense to recognize it.

What about generative models is really AI, other than the fact that they rely on some similar ideas from machine learning that are found in actual AI applications? Yes, maybe to an average person these are witchcraft, but any advanced technology can appear that way---Deep Blue beating Kasparov probably was witchcraft to the uninitiated. This is curve fitting, and the same approaches in 1999 were also trying to fit curves, it's just that we can fit them way better than before right now. Even the exact methods that are used to produce your examples are not fundamentally new, they are just the same old ideas with the same old weaknesses. What we have right now is a huge hammer, and a hammer is surely useful, but not the only thing needed to build AI. Calling these witchcraft is a marketing move that we definitely don't need, creates unnecessary hype, and hides the simplicity and the naivete of the methods used in producing them. If anybody else reads this, these are just increasing numbers, not witchcraft. But as the numbers increase it requires a little more effort and knowledge to debunk them.

I'm not dismissing things for the fun of it, but it pains me to see this community waste so many resources in pursuit of a local minimum due to lack of a better sense of direction. I feel like not much more is to be gained from this conversation, although it was fun, and thank you for responding.


I appreciate you're trying to wind it down so I'll try to get to the point, but there's a lot to unpack here.

I'm not evaluating these models on whether they are AGI, I am evaluating them on what they tell us about AGI in the future. They show that even tiny models, some 10,000x to 1,000,000x smaller than what I think are the comparable measures in the human brain, trained with incredibly simple single-pass methods, manage to extract semirobust and semantically meaningful structure from raw data, are able to operate on this data in semisophisticated ways, and do so vastly better than their size-comparable biological controls. I'm not looking for the human, I'm looking for small-scale proofs of concept of the principles we have good reasons to expect are required for AGI.

The curve fitting meme[1] has gotten popular recently, but it's no more accurate than calling Firefox ‘just symbols on the head of a tape’. Yes, at some level these systems reduce to hugely-dimensional mathematical curves, but the intuitions this brings are pretty much all wrong. I believe this meme has gained popularity due to adversarial examples, but those are typically misinterpreted[2]. If you can take a system trained to predict English text, prime it (not train it) with translations, and get nontrivial quality French-English translations, dismissing it as ‘just’ curve fitting is ‘just’ the noncentral fallacy.

Fundamental to this risk evaluation is the ‘simplicity and the naivete of the methods used in producing them’. That simple systems, at tiny scales, with only inexact analogies to the brain, based on research younger than the people working on it, are solving major blockers in what good heuristics predict AGI needs is a major indicator of the non-implausibility of AGI. AGI skeptics have their own heuristics instead, with reasons those heuristics should be hard, but when you calibrate with the only existence proof we have of AGI development—human evolution—those heuristics are clearly and overtly bad heuristics that would have failed to trigger. Thus we should ignore them.

[1] Similar comments on ‘the same approaches in 1999’, another meme only true at the barest of surface levels. Scale up 1999 models and you get poor results.

[2] See http://gradientscience.org/adv/. I don't agree with everything they say, since I think the issue relates more to the NN's structure encoding the wrong priors, but that's an aside.


'Sentiment analysis' of pre-canned pre-labelled datasets is a comparatively trivial classification task. Actual sentiment analysis as in 'take the twitter firehose and figure out sentiment about arbitrary topic X' is only slightly less out of reach than AGI itself.

Actual sentiment analysis is a completely different kind of ML problem than supervised classification 'sentiment analysis' that's popular today but mostly useless for real world applications.


Not-actual sentiment analysis is already useful (to some) and used in real world applications (though I'm not a fan of those applications), unless perhaps you're referring to the "actual real world" that lives somewhere beyond the horizon as well.


The problem with 'sentiment analysis' of today is it requires a human labelled training dataset that is specific to a particular domain and time period. These are rather costly to make and have about a 12 month half-life in terms of accuracy because language surrounding any particular domain is always mutating - something 'sentiment analysis' models can't hope to handle because their ability to generalise is naught. I've worked with companies spending on the order of millions per year on producing training data for automated sentiment analysis models not unlike the ones in the parent post.

To get useful value out of automated sentiment analysis, that's the cost to build and maintain domain specific models. Pre-canned sentiment analysis models like the parent post linked are more often than not worthless for general purpose use. I won't say there are 0 scenarios where those models are useful, but the number is not high.

Claiming that sentiment analysis is 90something percent accurate, or even close to being solved, is extremely misleading.


$1B is not like investing $100 in a crowdfunded project, "nice toy and let's hope for the best." I expect that Microsoft is going to look very closely at what OpenAI does and possibly steer it into a direction they like. Unless you have a few other $1B investors. We'll see how it plays out.


Congrats to the team, excited to see what you guys do with the momentum


>(I work at OpenAI.)

Humble understatement. *co-founded and co-run.


So you are now part of Azure, no? Like azure-open-ai?


Do you know what "investment" means?


Yes. You get money and give control.


Which is extremely well defined for OpenAI. One board seat, charter comes first. That's a legal agreement, not much open for interpretation.


Not quite true: Azure is now the only channel by which OpenAI innovation can be commercialised. This is the key control point.

For example, if there is a hardware innovation that makes DNN training 1000x faster (e.g. optical DNNs), but it does not exist on Azure, then by definition it cannot be offered on another cloud.

To sum up, this deal assures Azure/MS a choke point on any innovation that comes out of OpenAI.


is all your code public?


What do they mean by: "Microsoft will become our exclusive cloud provider"?

Being forced to use Azure for all your ML workloads seems a stupid constraint. For example, you might be comfortable with tensorflow/TPU and changing frameworks/tooling might be costly.


Azure has full support for Tensorflow, Keras, PyTorch and the rest of the popular stuff. Shouldn't be a problem at all
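
For what it's worth, the model code itself is cloud-agnostic; here's a minimal sketch (plain PyTorch, nothing Azure-specific assumed) that runs unchanged on any provider's GPU VM, so the switching cost is mostly tooling and data gravity rather than the framework:

    # Minimal PyTorch training step; identical on Azure, AWS, GCP, or a laptop.
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Linear(10, 1).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(32, 10, device=device), torch.randn(32, 1, device=device)

    for _ in range(5):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()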


I could be wrong on this. I think the AI/AGI problem isn’t so much about money and more about not having discovered the unique insight that will make it happen. In other words, someone in a garage might be far more likely to find how to trigger the proverbial inflection point.

Throwing money at a problem doesn’t always produce solutions. It can sure accelerate a project down the path it is on...but, if the path is wrong...

In some ways it reminds me of the battle against cancer.

Not being critical of this project or donation, just stating a point of view on the general problem of solving AI, a subject I have been involved with to one degree or another since the early 80’s.


I don't think this is true at all. I think that NLP models, especially the state-of-the-art ones (in the coming years), will cost a few million to train. Massive volumes of data sucked up by a model.
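
Back-of-envelope, with every number below an assumption rather than a quoted figure, the order of magnitude is easy to sanity-check:

    # Assumed numbers only; not anyone's published training bill.
    gpus = 512                 # assumed accelerator count for a large language-model run
    hours = 24 * 14            # assumed two-week training run
    usd_per_gpu_hour = 3.0     # assumed on-demand cloud price

    print(gpus * hours * usd_per_gpu_hour)   # ~516,000 USD for one run

One clean run lands in the mid six figures under these assumptions; add hyperparameter searches and failed runs and a few million per state-of-the-art model doesn't look far-fetched.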


That is exactly the issue that supports my perspective on this. The fact that we need millions of dollars and massive volumes of data is an indication that we might be going down the wrong path.

Think about how far we are from being able to even get close to what an ant can do. Work it backwards from there.


Congrats! So what's OpenAI's current valuation?


Given their stated mission:

> OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity.

I'm struck by the homogeneity of the OpenAI team.

https://openai.com/content/images/2019/07/openai-team-offsit...

It seems to be mostly white people and a few Asians, without a single black or Hispanic person.


How does that matter? Does diversity for the cause of diversity lead to better results? Is there any data around this?

Hiring the most qualified people is the most important thing. As long as there isn't an inherent bias against hiring someone who is hispanic, black, or brown, it should be fine.


What you're posing is an age-old counterargument based around an irrational fear of white people experiencing prejudice and losing out on opportunities to underqualified people they would have otherwise had.

There have been studies done around diversity, conducted both privately and publicly, which consistently conclude that increased diversity does result in enhanced decision-making, collaboration, and organizational value-add due to the different perspectives having a net positive influence rather than neutral or negative.

Beyond pragmatism, from an idealist perspective aiding in increasing organizational diversity is the morally right thing to do. That doesn't mean hiring underqualified people; it means refusing to fill the position until the right person is found, which is a whole other problem on its own.

Here are some resources to get started: https://www.gsb.stanford.edu/insights/diversity-work-group-p... https://journals.aom.org/doi/abs/10.5465/1556374


Those studies are a pretty weak argument. The first link isn't about the type of diversity grandparent was talking about, but "informational diversity". The paper produces no direct correlation. The pragmatic angle is overblown.

It being the right thing to do is a much stronger argument imo. Which I agree with, but companies generally aren't interested in it, unless they can use it for marketing.


Can you tell me how much diversity juice I need to add to my team to make it perfect? Do I need to hire 1 black person for every white person? Please, provide me the ratio required to avoid marginal returns.

I would need to undergo DNA testing for each person to avoid hiring anyone that is too much of one race. Do I get better returns if I hire someone that has 25% of 4 different races?

Please elaborate on your very empirically verifiable and obviously logically cohesive argument. If you don't understand where my skepticism originates from, please turn your attention to the results of Black Economic Empowerment in South Africa (where racial quotas are enforced to ensure the blackest candidate is hired, not the best candidate).

Look at the performance of the South African sports teams that have been forced to recruit players not based on merit, but on the colour of their skin.


And who the hell do they think they are to know what's better for humanity?

Their arrogance is dangerous indeed.


Excellent way to reallocate some of that record breaking revenue they just announced.


Does anyone have any thoughts on Vicarious, the non-deep-learning competitor to OpenAI?


I'm also curious about this, haven't heard anything about them in a while


Hopefully this is good news for Bing.


Well, hopefully if someone creates a benevolent AGI then it should be good news for everyone.


[EDIT]: friendly -> non-friendly oops.

That's what seems so confusing about HN replies here. (Non-friendly) AGI is an extreme existential risk (depending on who you listen to).

I'm perfectly fine with rewarding the org that's responsible for researching friendly AGI to do it _right_ (extremely contingent on that last bit).


Well, I don't think I'd view any AGI that is an existential risk as "friendly".


What if friendliness is not a property of the technology, but of the use? With all the potential concerns of AGI, I think nuclear technology is a good analogue. It has great potential for peaceful use, but proliferation of nuclear weapons is a so-far inseparable problem of the technology. It's also relatively easy to use nuclear technology unsafely compared to safely.

The precedent for general intelligence is not good. The only example we know of (ourselves) is a known existential threat.


the thing is, nobody knows how to do that. it's not a money problem.


OpenAI is a research company - that's what research is, working out how to do things we don't know how to do. Research requires some money so at one level it is a money problem.


but this is alchemy, isn't it? there isn't even a theoretical framework from which we can even begin to suggest how to keep any "general intelligence" benign. good old fashioned research notwithstanding, a billion dollars is not about to change this. it reads more to me like this is an investment in azure (ie microsoft picking up some machine learning expertise to leverage in its future cloud services). that's not a judgement, and i'm sure lots of cool work will still come from this, given the strength of the team and massive backing they have. it just smells funny.


Alchemy wasn't entirely wrong; it is indeed possible to turn lead into gold, it was just beyond the technology of the time: https://www.scientificamerican.com/article/fact-or-fiction-l....


Well, unlike alchemy there are some pretty good examples of intelligent agents around - some even involved in this project!


(no sarcasm)

You know, I can't prove that researchers being funded is the best way of figuring out how to do things, but I have a gut intuition that tells me that.

I'll look into it so that I'm not just blindly suggesting that $$ ==> people ==> research ==> progress.

Thanks for the opportunity to reflect!


it really is an interesting subject to explore within the philosophy of science :)


You need to be able to test your designs and for that you need resources like AI accelerators.


It certainly is. Eventually Bing will automatically infer that it is not a good enough search engine and will destroy itself.


Azure has a supercomputer emulator, and even if OpenAI doesn't get the full $1B in cash and instead gets to use it as credits on the emulator, that could be huge.


Why is it called an investment? Is OpenAI a corporation that plans to pay out dividends? I thought it's more of a non-profit. This deal looks more like a donation of cloud compute resources. Still a great idea (moves ML research closer to their platform, eating more of Google's lunch), but it's not an investment in OpenAI.


The investment is in OpenAI LP: https://openai.com/blog/openai-lp/!


Because Microsoft has calculated a reasonable ROI to make it worth it.


Yeah but how's the return exactly going to come about (dividends? merger and talent takeover?), that's the question. I can see it returning itself via better ML sales on Azure.


Or management wants to exfiltrate $1B of shareholder dollars.


I think they decided to go ahead and not be a charity in addition to not being open.


Exciting announcement for the OpenAI team.

The wording in the press release reminds me of a question I haven't been able to answer for a while now. Can anyone point me to the moment in time when general purpose artificial intelligence was re-branded to artificial general intelligence? Is GAI that much worse of an acronym than AGI? What's the deal here?


Can we actually expect OpenAI to remain "open" with investments like this getting dumped into the project?

I'm still waiting for the 1.5B-parameter GPT-2 model to get released, but they're still going with that "too dangerous for society" BS that they're using to get journalists' attention...


They need to prove it's safe for society before releasing it. If they don't believe it is then not releasing it is of course the smart thing to do. Furthermore, anyone else in the future who creates something they think might be dangerous now has a better argument, because they can point to OpenAI playing it safe and say "I'm just doing the same thing"


I think you're making a silly mistake actually giving this "too dangerous for society" line any validity.

I understand that this tech could be used for nefarious purposes, but this isn't world-ending tech. This is just hard to differentiate from human writing...

The choice to keep their "too powerful model" unreleased is more an attempt to stoke sensationalism out of journalists eager to report on "The AI too dangerous to release" than it is actually an earnest attempt at protecting society.

The dangerous rogue-AI is a Hollywood trope. We don't live in the Ghost In the Shell universe, we live in reality, and a text-generating algorithm isn't particularly dangerous when you think about it.


> I understand that this tech could be used for nefarious purposes, but this isn't world-ending tech. This is just hard to differentiate from human writing...

When, in any OpenAI communication, have they ever implied GPT2 is world ending tech?

> We don't live in the Ghost In the Shell universe, we live in reality, and a text-generating algorithm isn't particularly dangerous when you think about it.

Are you sure it won't be used to automatically post fake news or create artificial group cohesion by bad state actors? We can already do stuff like that, but this allows you to do it faster and cheaper. Don't you think that's at least a little scary?


Again, I already made it clear that this has nefarious purposes.

But there is zero utility in keeping the larger model secret. Society is not safer as a result.

The only reason they ever framed it the way they did was to get media attention.

"This AI is so dangerous, the creators are refusing to release it to the public!" is going to get way more clicks than "This AI writes English sentences that almost appear coherent.".

If you want more funding and attention towards your project, you're going to say stuff that gets journalists' attention.


> As the waitress approached the table, Sam Altman held up his phone. That made it easier to see the dollar amount typed into an investment contract he had spent the last 30 days negotiating with Microsoft.

> “$1,000,000,000,” it read.

Wow, Sam Altman sounds like an asshole.


OpenAI to for-profit OpenAI to billion-dollar partnership with Microsoft doesn't give me the warm and fuzzy feelings I had when I first heard of OpenAI. I saw it as "we're going to save the world by building an AGI before someone builds SkyNet"; today it is "we've got into bed with a company that had one of the most famous anti-trust cases in the United States and one of the most famous anti-competition cases in the EU".

And of course going after high school student Mike Rowe for registering MikeRoweSoft.com (Seriously Microsoft, exactly no one thought he was you).

While Microsoft isn't inherently evil (one could argue that, via Windows, Microsoft is largely responsible for the widespread adoption of computers), this definitely makes me feel slightly uneasy.

I'd rather see OpenAI continue to be funded by donation, and eventually royalties/licensing of technologies it develops, not partnerships with companies like 'IBM 2: Electric Boogaloo'.

But what do I know, I'm a Morlock not an Eloi.


The whole MikeRoweSoft thing was 15 years ago. I don't think it's fair to judge the company now based on that.


If it was the only instance, I'd say sure, but:

- Lindows

- Microwindows

- wxWindows

- Windows Commander

- Suing Amish Shah

- MikeRoweSoft

Then all of the various anti-trust/anti-competition lawsuits against them by both companies and government entities (Be Inc, Netscape, the EU, Spain, the US Government, Caldera, Opera Software. Also individuals in numerous class actions).

Plenty of reason to feel slightly concerned.


Are any of those recent? I agree that Microsoft has been pretty dickish and I'm sure parts of it continue to be (see http://bonkersworld.net/organizational-charts as an example of why), but as a whole I feel that they are better with OSS now than they have ever been.


Congrats on the investment, but this release reads like a parody.

I believe that building a beneficial warp-drive engine will be the most important technological development in human history, with the potential to shape the trajectory of humanity. The aliens we're sure to encounter will be capable of mastering more fields than any one human — like a tool which combines the skills of Curie, Turing, and Bach. An alien working on a problem would be able to see connections across disciplines that no human could. But even though I'm known as the warp-drive-guy, I don't actually know how to build a warp drive, so in the meantime I am building increasingly powerful transportation technology in the hope this would lead to a warp drive one day soon, and have decided to focus on bicycles. They're really good bikes, though, and unlike others who make bicycles, I like to consider those I build to merely be pre-warp, a necessary step towards warp technology. So when you buy my bikes [1], you are literally helping me change the trajectory of human history and meet aliens (did I mention Curie, Turing and Bach?)

This is truly a fine specimen of Silicon Valley prose. It's got something for everyone: human history, a wild-eyed dream of a bright future, a connection to the arts, name-dropping, the trajectory of humanity, and, of course, lots of money in cloud services (integrated platform). They even showed some restraint in stopping short of ending all war and curing all disease. "Making the world a better place" is really too mundane.

[1]: The Warp Drive Corporation®'s Pre-Warp Bike™️ is now on sale on Amazon.


(I work at OpenAI.)

It comes down to whether you believe AGI is achievable.

We've talked about why we think it might be: https://medium.com/syncedreview/openai-founder-short-term-ag..., https://www.youtube.com/watch?v=YHCSNsLKHfM

And we certainly have more of a plan for building it than warp drives :).

EDIT: I personally think the case for near-term AGI is strong enough that it'd be hard for me to work on any other problem — and find it important to put in place guardrails like https://openai.com/blog/openai-lp/ and https://openai.com/charter/.

Even if AGI turns out to be out of reach, we'll still be creating increasingly powerful AI technologies — which I think pretty clearly have the potential to alter society and require special care and thought.


> It comes down to whether you believe AGI is achievable.

No, it does not. I very much believe AI (or AGI, as you call it) is achievable, but may I remind you that some years after the invention of neural networks, Norbert Wiener, one of the greatest minds of his generations, said that the secret of intelligence would be unlocked within five years, and Alan Turing -- a component of your very own post-pre-AGI era's AGI -- another great believer in AI, scoffed and said that it will take at least five decades. That was seven decades ago, and we are not even close to achieving insect-level intelligence. Maybe we'll achieve AI in ten years and maybe in one hundred, but you don't know which of those is more likely, and you certainly don't know whether any of our pre-AGI technology even gets us on the right path to achieving AGI. There have been other paths towards AI explored in the past that have largely been abandoned.

OpenAI is not actually building AGI. Maybe it hopes that the things it is working on could be the path to an eventual AGI. OpenAI knows this, as does Microsoft.

This does not mean that what OpenAI does is not valuable and possibly useful, but it does make calling it "pre-AGI" pretentious to the level of delusion. Now I know there were (maybe still are) some AI cults around SV (I think a famous one even called themselves "The Rationalists" or something), but what makes for a nerdy, fanciful discussion in some dark but quirky corner of the internet looks jarring in a press release.

> If you believe AGI might be achievable any time soon, it becomes hard to work on any other problem — and it's also very important to put in place guardrails like https://openai.com/blog/openai-lp/ and https://openai.com/charter/

I can't tell if you're serious, but assuming you are, the problem is that there are many other things that, if you thought they could be achievable any time soon, would make it hard to work on any other problem, as well as make it important to put guardrails in place. The difference is that no one actually knows how to put guardrails on AGI. We are doing a pretty bad job putting guardrails on the statistical clustering algorithms that some call (pre-AGI?) AI and that we already use.


If AGI is achievable (seems likely given brains are all over the place in nature) and achieving it will have consequences that dwarf everything else then doesn't it make sense to focus on it?

Yes, historically people were way too optimistic and generally went down AI rabbit holes that went nowhere, but two years before the Wright flyer flew, the Wright brothers themselves said it was 50 years out (and others were still publishing articles about human flight being impossible after it was already flying).

People are bad at predictions, in the Wright brothers case since they were the people that ultimately ended up doing it two years later they were likely the best to make the prediction and were still off.

Given that AGI is possible and given the extreme nature of the consequences, doesn't it make sense to work on alignment and safety? Why would it make sense to wait? If you accidentally end up with AGI and haven't figured out how to align its goals then that's it, the game is probably over.

Maybe OpenAI is on the right path, maybe not - but I think you're way too confident to be as sure as you are that they are not.


First of all, I was talking about the language, not the work. It makes sense to study AI as it does many other subjects, but we don't know that it will "have consequences that dwarf everything else" because we don't know what it will be able to do and when (we think that it could, but so could, say, a supervirus, or climate change, or the return of fascism). People hang all sorts of dreams on AI precisely because of that. That cult I mentioned, The Rationalists, basically imagined AI to be a god of sorts, and then you can say "wouldn't you want to build a god?" But we don't know if AI could be a god. Maybe an intelligent being that thinks faster than humans goes crazy? Of course, we don't know that, but my point is that the main reason we think so much of AI is that at this time, we don't know what it is and what it could do.

> Why would it make sense to wait?

Again, that's a separate discussion, but if we don't know what something is or when it could arrive, it may make more sense to think about things we know more about and are already here or known to be imminent. Anyway, anyone is free to work on what they like, but OpenAI does not know that they're "building artificial general intelligence."

> I think you're way too overconfident to be as sure as you are that they're not.

I don't know that they're not, but they don't know that they are, and that means they're not "building AGI."


I can understand your point about the language, but I guess I think it's reasonable to set the goal for what you actually want and work towards it. It may turn out to be unattainable, but I think generally you need to at least set it as the goal. It also seems less clear to me that they are close or far from it (I don't think it's on the same level as warp drive).

I don't know about the god thing you mention and the rationalist stuff I've read hasn't been about that. The main argument as I understand it is:

1. AGI is possible

2. Given AGI is possible if it's created without the ability to align its goals to human goals we will lose control of it.

3. If we lose control of it, it will have unknown outcomes which are more likely to be bad than benign or good.

Therefore we should try and figure out a way to make it safe before AGI exists.

Maybe humans just happen to be an intelligence upper bound and anything operating at a higher level goes crazy? That seems unlikely to me given that humans have a lot of biological constraints (heads have to fit out of birth canals, have to be able to run on energy from food, selective pressure for other things besides just intelligence). You could be right, but I'd bet on the other side.

The last bit is that if we can solve this in a way that aligns its goals with human goals (an open question, since humans themselves are not really aligned), we could solve most problems we need to solve.


I think discussions of AI safety at this stage -- when we're already having problems with what passes for AI these days that we're not handling well at all -- is a bit silly, but I don't have something particularly intelligent to say on the matter, and neither, it seems, does anyone else, except maybe for this article that shows that the AGI paranoia (as opposed to the real threats from "AI" we're already facing, like YouTube's recommendation engine) may be a result of a point of view peculiar to Silicon Valley culture: https://www.buzzfeednews.com/article/tedchiang/the-real-dang...


I agree with you in a way, if AGI ends up being 300yrs out then work on safety now is likely not that important since whatever technology is developed in that time will probably end up being critical to solving the problem.

My main issue personally is that I'm not confident if it's really far out or not and people seem bad at predicting this on both sides. Given that, it probably makes sense to start the work now since goal alignment is a hard problem and it's unknown when it'll become relevant.

I read the BuzzFeed article and I think the main issue with it is he assumes that an AGI will be goal aligned by the nature of being an AGI:

"In psychology, the term “insight” is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking, and it’s something most humans are capable of but animals are not. And I believe the best test of whether an AI is really engaging in human-level cognition would be for it to demonstrate insight of this kind."

Humans have general preferences and goals built in that have been selected for for thousands of years. An AGI won't have those by default. I think people often think that something intelligent will be like human intelligence, but the entire point of the strawberry example is that an intelligence with different goals that's very good at general problem solving will not have 'insight' that tells it what humans think is good (that's the reason for trying to solve the goal alignment problem - you don't get this for free).

He kind of argues for the importance of AGI goal alignment which he calls 'insight', but doesn't realize he's doing so?

The comparison to Silicon Valley being blinded by the economies of their own behavior is just weak politics that's missing the point.


We don't know that "goal alignment" (to use the techo-cult name) is a hard problem; we don't know that it's an important problem; we don't even know what the problem is. We don't know that intelligence is "general problem solving." In fact, we can be pretty sure it isn't, because humans aren't very good at solving general problems, just at solving human problems.


> Therefore we should try and figure out a way to make it safe before AGI exists.

Makes no sense to me, how would you ever be able to figure out a way to make something safe before it even exists?

Someone who has never built a nuclear reactor most likely could not think of a way to prevent the Chernobyl disaster.

(OK, maybe this is a wrong example as someone who did couldn't do this either, but the point should be clear)


I think the argument is that decision theory and goal alignment can be worked on without knowing all the details about how an AGI will work.

https://intelligence.org/2016/12/28/ai-alignment-why-its-har...


ah yes Yudkowsky, the well established AI researcher & definitely not a crank


Personal attacks are not ok here, regardless of whom you're attacking. Can you please not post like this to HN?

https://news.ycombinator.com/newsguidelines.html


We don't know whether AGI is possible or even exactly what it is. However, if a form of intelligence exists where adding more hardware adds more capabilities in the fashion of present computing, but where the capacities are robust and general purpose like humans' rather than fragile and specialized like current software's, then we'd have something of amazing power - brilliant people can do amazing things. A device that's akin to an army of well-organized brilliant people in a box clearly would have many capacities. So it's reasonable to say that if that's possible, investing in it may have a huge payoff. (Edit: the "strong" version of "AGI is possible" would be that AGI is an algorithm that gives a computer human-like generality and robustness while having ordinary software-like abilities. There are other ideas of AGI, of course - say, a scheme that would simulate a person on such a high level that the simulated person had no access to the qualities of the software doing the simulation - but that's different.)

The problem, however, is I think another gp's objection. OpenAI isn't really working on AGI, it's making incremental improvements on tech that's still fragile and specialized (maybe even more specialized and fragile), where the only advance of neural nets is that now they can be brute-force programmed.


> However, if a form of intelligence exists where adding more hardware adds more capabilities in the fashion of present computing, but where the capacities are robust and general purpose like humans' rather than fragile and specialized like current software's, then we'd have something of amazing power - brilliant people can do amazing things.

That's a very big if... Also, I'd argue that most progress happens not because of some brilliant people, but because of many people working together... Then if your AGI only reaches the level of intelligence of humans and maybe a bit more (what does "more" even mean in terms of human intelligence? more empathetic? faster calculation ability? more memory? what would the use of this be? all things we can't really foresee), it raises the question of whether this would ever be possible in a cost-efficient way (human intelligence seems like it is, in a certain way, "cheap").


>That's a very big if...

Oh, this is indeed a big if. A large, looming aspect of the problem is that we don't have anything like an exact characterization of "general intelligence", so what we're aiming for is very uncertain. But that uncertainty cuts multiple ways. Perhaps it would take 100K human-years to construct "it", and perhaps just a few key insights could construct "it".

> Also, I'd argue that most progress happens not because of some brilliant people, but because of many people working together...

The nature of a problem generally determines the sort of human-organization one needs to solve a problem. Large engineering problems are often solved by large teams, challenging math problems are generally solved by individuals, working with published results of other individuals. Given we're not certain of the nature of this problem, it's hard to be absolute here. Still, one could be after a few insights. If it's a huge engineering problem, you may have the problem "building an AGI is AGI-complete".

> Then if your AGI only reaches the level of intelligence of humans and maybe a bit more (what does "more" even mean in terms of human intelligence? more empathetic? faster calculation ability?

I've heard these "we'll get to human-level but it won't be that impressive" kinds of arguments and I find them underwhelming.

"What use would more memory be to an AGI that's 'just' at human level?"

How's this? Studying a hard problem? Fork your brain 100 times, with small variations and different viewpoints, to look at different possibilities, then combine the best solutions. Seems powerful to me. But that's just the most simplistic approach, and it seems like an AGI with extra memory could jump between the unity of an individual and the multiple views of work groups and such, in multiple creative ways. Plus, humans have a few quantifiable limits - human attention has been very roughly defined as being limited to "seven plus or minus two chunks". Something human-like but able to consider a few more chunks could possibly accomplish incredible things.
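
A toy sketch of that "fork with small variations, keep the best" pattern, applied to a stand-in numeric problem rather than anything brain-like; the objective, fork count and noise level are all made up for illustration:

    import random

    def objective(x):
        # Stand-in for "the hard problem": lower is better.
        return (x - 3.14) ** 2

    def fork_and_select(seed, n_forks=100, noise=0.5, rounds=20):
        """Repeatedly fork a candidate with small variations and keep the best."""
        best = seed
        for _ in range(rounds):
            forks = [best + random.gauss(0, noise) for _ in range(n_forks)]
            forks.append(best)  # keep the current best in the running
            best = min(forks, key=objective)
        return best

    print(fork_and_select(seed=0.0))  # ends up near 3.14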


> If AGI is achievable

It almost certainly is. Humans make new intelligences all the time.

> and achieving it will have consequences that dwarf everything else

It probably won't; humans make new intelligences all the time. Changing the technology base for that doesn't have any significant necessary consequences.

A revolution in our ability to understand and control other intelligences might have consequences that dwarf anything else, with or without AGI, but that's a different issue, and moreover one whose shape is basically impossible to even loosely estimate without some more idea of what the actual revolution itself would be.


The difference is in the scale of the intelligence, not just the technology.

It's not so much a new human like intelligence that runs on silicon, it's a general problem solving intelligence that can run a billion times faster than any individual human. This is the part I think you're underestimating.

If you have that without the ability to align its goals to human goals then that's a problem.


> The difference is in the scale of the intelligence, not just the technology.

AGI is inherently no greater in scale than human intelligence, so scale is not a difference with AGI, though it might be with AGsuperI. But that's a different issue than mere AGI, and may be impossible or impractical even if AGI is doable; we have examples of human-level intelligence so we know it is physically achievable in our universe, we don't have such examples for arbitrarily capable superhuman intelligence.


I think that's somewhat of an arbitrary distinction likely not to exist in practice.

If you have an AGI you can probably scale up its runtime by throwing more hardware at it. Maybe there's some reason that'll prevent this from being true, but I'm not sure that should be considered the default or most likely case.

Biology is limited in ways that AGI would not be due to things like power and head-size constraints (along with all the other things that are necessary for living as a biological animal). Human intelligence is more likely to be a local maximum driven by these constraints than the upper bound on all possible intelligence.


> If you have an AGI you can probably scale up its runtime by throwing more hardware at it

Without understanding a lot more than we do about both what intelligence is and how to achieve it, that's rank speculation.

There's not really any good reason to think that AGI would scale particularly more easily than natural intelligence (which, in a sense, you can scale with more hardware: there are certainly senses in which communities are more capable of solving problems than individuals.)

> Biology is limited in ways that AGI would not be due to things like power and headsize constraints

Since AGI will run on physical hardware it will no doubt face constraints based on that hardware. Without knowing a lot more than we do about intelligence and mechanisms for achieving it, the assumption that the only known examples are particularly suboptimal in terms of hardware is rank speculation.

Further, we have no real understanding of how general intelligence scales with any other capacity anyway, or even if there might be some narrow “sweet spot” range in which anything like general intelligence operates, because we don't much understand either general intelligence or its physical mechanisms.


This is especially true considering we're talking about software vs. hardware (the airplane). A few, or even one, brilliant mind(s) could make a breakthrough in AGI in a matter of months.


> A few, or even one, brilliant mind(s) could make a breakthrough in AGI in a matter of months.

same goes for warp drives doesn't it?

The point is that people don't see how we build THAT out of these tools we currently have. We only build pastiches of intelligence today and either we have an arrogant view of the level of our own intelligence or we can't make THAT with THIS.

But maybe warp-drives, maybe world-peace too?


Not really - we're not sure if warp drive is possible given the physical constraints of the universe.

AGI is possible because intelligence is possible (and common on earth) in nature.


> AGI is possible because intelligence is possible (and common on earth) in nature.

But you haven't asked if we're capable of building it. While it might be technically possible are we capable of managing its construction?

All I see today is ways of making the process more opaque, with the benefit of not having to provide the implementation. How does that technique even start to scale in terms of its construction? I worry about the exponentially increasing length of the "80% done" stage, and that's on the happy path.


Following your reasoning, flying at the speed of light is possible because photons travel at the speed of light. We are not photons though. Is it possible or not to travel at the speed of light in a spaceship?


Warp drives travel faster than the speed of light? That's what I meant by not possible.

Ignoring that if we saw miniature warp drives everywhere around us in nature then yes I would be more confident they were possible.


I see your point; I just wanted to point out that there are different challenges for us than for nature. Flying like a bird poses different challenges than flying with a Boeing 747, even though these challenges might share a subset of the physics, like Bernoulli’s principle.


Yep - I think that's fair and a good analogy.

Similarly to how human airplanes don't flap their wings like birds, there will probably be implementation differences that make sense but share the underlying principles. Particularly since the artificial version isn't constrained by things biology needs to handle.


No, way wrong, there would be enormous hardware costs to building a warp drive. There are near zero costs (possibly) of building AGI.


Mythical man month. I feel like you're seriously underestimating how hard this is. It's got to be one of the greatest engineering challenges of our species, and to flaunt that it's "near zero cost" is offensive.


Not offensive in the least, it's a compliment to what our species has done to date. We are all standing on shoulders and the shoulders have never been higher. Think of the things you can build in a day that were impossible to build 30 years ago. To think that there isn't at least some chance someone will build AGI in the next 30 years is foolish. Again, I'm just saying there is a reasonable chance, like hitting a homerun. It's not likely for any given plate appearance, but given the number of games and players it happens every summer day.


We're making the process more opaque. How can that scale to AGI? We'll be stuck at 80% done for much longer.

I would posit that while it's possible, it will take so long on this tech stack that we'll find another in the interim that will produce better results. I'm not convinced this branch is the winner.


Oh, do you mean how can Azure scale to AGI? I have no opinion on Azure, I just meant someone smart will figure it out. There are huge financial incentives to do so, when that happens, we (humans) figure shit out.


> Oh, do you mean how can Azure scale to AGI?

No, not in the slightest. I mean as we progress the dev cycles get harder and slower. Then we need more engineers and the administration of more engineers working together makes everything harder.

Have you ever considered that making a rock think might be one of the greatest engineering projects our species has ever taken on? Sure humans might figure it out but I'm of the belief it will take them a very long time to. In addition, I believe that in that timescale a different tech stack might show more promise. I'm not convinced this technological branch scales all the way to AGI.


Fair point, but I think admin needs have gotten lighter. It took 400k people to get us to the moon. I want to see the results of 400k engineers working independently or in small teams on AGI.


Sounds like a great idea and I'm all for it but I'm talking about the integration of that mess. It will be like trying to hit an ant on the moon with the precision of 18th century artillery.

> Well we've removed its irrational hatred of penguins but now it struggles with the concept of Wednesday again...


Why would AGI dwarf anything?

There are at least 7 billion beings on the planet with AGI already. I think a bigger problem is the general well being of the aforementioned 7 billion entities.


For a simple thought experiment, take one human brain architecture (say it can operate at 100 operations per second) and scale that up to a billion operations per second. It can do centuries' worth of human thinking in a couple of hours.
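
Back-of-the-envelope version of that claim, using the same illustrative numbers (100 ops/sec and a billion ops/sec are assumptions for the thought experiment, not measurements):

    base_rate = 100                    # assumed "operations per second" baseline
    scaled_rate = 1_000_000_000
    speedup = scaled_rate / base_rate  # 10,000,000x
    wall_clock_hours = 2
    subjective_years = wall_clock_hours * speedup / (24 * 365)
    print(speedup, round(subjective_years))   # 10000000.0, roughly 2283 years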

If you have an AGI that has goals that are not aligned with your interests it'll dwarf everything else because it thinks faster than you (and can therefore act faster than you) in pursuit of its goals.


But human intelligence is weird. It's not clear to me that increasing the speed of my brain would really accomplish much in my day-to-day life. A lot of the value that I add happens during these 'Eureka' moments, often triggered when I am working on a different problem, taking a break, or after a good night's sleep. Adding more processing speed may or may not make that process more scalable.

And another thing to consider, is that in the real world success is not easy to define and it is only loosely correlated with intelligence. We have 7 billion people, each attempting random little variations on 'succeeding at life'. And the 'winners' generally require that some of the 7 billion people agree to 'reward' them (i.e. by giving them money). My last 3 purchases were watermelon seeds for my garden, a pair of jeans, and a dinner at a Vietnamese restaurant. It's not clear to me how AI would take over any of those transactions. Maybe make the jean manufacturing more efficient, but the price I paid was already pretty low.


Sure, it happens in Eureka moments, which for you come when you take a break that might be a few hours; but if you're running a billion times faster, then a few hours turns into a billionth of that time. That's what I'm trying to get at as an example - even assuming the exact same architecture otherwise.

For the real world success part that's where goal alignment comes in. If we're going to solve things like dealing with the sun burning out, becoming an interplanetary species, or death then having an AGI that can work on these problems with us (or as part of us if Neuralink can succeed on what they want to do) will be a big deal.

It sounds crazy, but I think success here is a lot bigger than automating what clothes you were going to buy. Incentive-based systems like capitalism work pretty well, but not being able to coordinate effectively at scale is a major source of current human problems; theoretically a goal-aligned AGI could do that, or at least help us do it.


AGI would dwarf those 7 billion people because it would concentrate tremendous power into the hands of a very few.

It's the dream of being one of the few who gets to control and direct that concentrated power that fuels these dreams, which is why it's imperative that they dress it up in the language of benefiting society.

The essence of the ethical problem with AI is that there is no person or small group of people who can be trusted to use such power without creating a real dystopia for the rest of us.


I think this is a pretty big misunderstanding of the AGI issue.

Nobody is going to control the hyper-intelligent AGI if its goals are not aligned with human goals more generally. That's the nature of something being a lot smarter than you with its own goals.


Wait, is the claim that intelligence alone determines who is in control? I've certainly seen lots of examples where people were controlled by others, even though they were orders of magnitude more intelligent than those who had power.

Are the people who are trying to make AI a reality planning to give away the ability to unplug the machine it will inevitably depend upon, or give it the ability to control nuclear weapons so that it can wipe out humanity before that happens, like a bad movie script? It really does seem that ridiculous to aver that the creators of AI, if it ever comes to be, won't retain ultimate control over what it can do in the physical world.

Whatever its goals end up being, they will be aligned with Microsoft's goals if OpenAI gets there first. That's what the billion dollars is meant to ensure.


I don’t think humans are orders of magnitude apart.

The difference between the world’s smartest human and its dumbest human is tiny relative to the possible spectrum of intelligence.

Do you see the smartest chimps tricking or controlling humans?

If you wanted to trick a chimp to do something you wanted it to do, do you think it could stop you? And we’re probably a lot closer to chimp intelligence than an AGI would be to us.


> If AGI is achievable (seems likely given brains are all over the place in nature)

I don't see how that conclusion follows from the antecedent.


Brains aren't magical; if the laws of nature allow for them to exist in nature, and we see generalized intelligence develop and get selected for repeatedly, then that suggests it can be done - it's just a matter of knowing how.


It might also be that this can't be replicated in an artificial way (that is, human beings maybe aren't smart enough to ever understand their own intelligence, even using all the tools at their disposal)

In a certain way, brains are so complicated that (at least for the moment) they seem quite magical to us


Things often seem magical until they're understood.

As far as humans not being able to ever understand it, I guess that could be true but I wouldn't bet on it.


Our struggle to understand the brain suggests that "just a matter of knowing how" might take a while.


Pron, I fully support your take here. Most AGI campaigners here clearly think that we must have already figured out a lot about how consciousness works. But is there any evidence to back that up? No, because we _haven't_ created consciousness. The most we've done is manipulate _existing_ consciousness. Sure, we can point to similarities between deep learning and the brain, and these avenues are interesting and, I think, worthwhile to explore. But false starts happen often in science (e.g. bloodletting / astrology / pick your own) and seem to occur at intersections where concrete evidence of results is inaccessible. No one can say with certainty we aren't in the middle of one now.

Like pron, I don't mean to dismiss the work any AI researcher is doing, but the industry has growing money and power and I just think people should be careful with statements like the one pointed out already and so often encountered: "if you believe AGI might be achievable any time soon, it becomes hard to work on any other problem."


Consciousness may not have anything to do with AGI. Besides, we haven't as a species defined consciousness in a consistent and coherent way. It may be an illusion or a word game. AGI may end up being more like evolution, a non-conscious self optimizing process. Everyone is talking about AGI but we can't even define what we mean by any of these terms, so to put limits on how near or far away major discoveries might be is pointless.


> It may be an illusion or a word game.

If consciousness is an illusion, what is experiencing the illusion? What makes an experience of consciousness an illusion rather than actual?

(Don't quote Dennett in response, I'm curious to see a straightforward reply to this that makes sense.)


True. I used “consciousness” haphazardly.

Not super related, but AGI enthusiasts sometimes remind me of this: https://youtu.be/bS5P_LAqiVg?t=9m50s


>we are not even close to achieving insect-level intelligence

Then again I don't know many insects that drive cars, beat the champions at chess and go or similar.


I say insect intelligence is good enough to at least drive a car. A bee, for example, is pretty damn amazing at flying & navigation. I mean, flying and avoiding obstacles through miles of forest to find food is no easy task.


The difference is that insects can perform a wide variety of tasks needed for their survival, but all "AI" created by humans so far can only perform a single task each.


I don't know of ___a single___ piece of software that can do these either. This is exactly one challenge for AGI: to know which pattern recognition part to pull up in which situation. An insect can make decisions about when to fly, crawl or procreate. I do not think that we have something similar in software just yet.


Also Atari! Don't forget the mighty achievement of somewhat mastering Atari - the key indicator of intelligence in our time.


Not taking a side on the over/under for AGI, but perhaps you are also acquainted with this little gem:

https://pdfs.semanticscholar.org/38e6/1d9a65aa483ad0fb4a219f...

Shannon, Minsky, and McCarthy!


It's an interesting dream team. But AFAICT this is only a proposal. Did the proposed series of studies take place? If so what was the outcome?



I wonder if they were the dream team back then or just promising young researchers.

I think the interesting take away is that they (seem to have) expected to solve the major problems of AI (language, common sense etc) over a summer with a small stipend.


So you’re saying... it comes down to whether you believe AGI is achievable within our lifetime? Or whether it’s even worth contributing to at all. I think the parent has made their position pretty clear through their employment choices; that’s a level of skin in the game that naysayers don’t really have.


Perhaps I haven't been clear. I have no issue with the research OpenAI is performing, nor with anyone's beliefs in AI's imminence or their personal role in bringing it about. However, no one knows whether what they're doing is even on the right path towards AI, and certainly not when it will be achieved, plus the topic has been subject to overoptimism for decades now, so I do take issue with publicly calling what you do "working on AGI" or "pre-AGI" even though you have no idea whether that is what you're doing. Hopes and aspirations are good, but at this stage they fall far short of the level required for such public proclamations. My issue is with the language, not with the work.


I think your issue with the language is not shared by most people. In research we rarely know beforehand what research is on the right or wrong path. But we are comfortable with someone saying they are researching something even if they don't know beforehand whether the research will be useful or a wild goose chase. For example, most people's first thought on hearing "I'm researching ways to treat Alzheimer's" isn't "Only if it passes phase 3 trials!".


Yeah, in this release they're not saying they're doing research towards AI, or even that they're researching AI. They're saying that they're "building artificial general intelligence" and developing a platform that "will scale to AGI." (emphasis mine) They're also calling what they're actually building "pre-AGI."


> We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI.

This sentence might by itself imply they are farther along than they are, but in the context of the whole article I never got the impression they were close to actually building an AGI.

> The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.

This read pretty straightforwardly to me. Pre-AGI seems like a shorthand for useful technologies like GPT-2.

Reading the article I never got the impression they'd solved AGI, or were even close. The context of the article is a partnership announcement not a breakthrough. I could see how a few people who are very unsophisticated might get a little confused as to how far along they are. But I assumed they were writing for people who had heard of OpenAI which pretty much eliminates anyone this unsophisticated.


They don't know what connection, if any, what they're doing has with AGI. For all we know right now, some botanist researching the reproductive system of ferns is as likely to bring about a breakthrough in AI as their research is. To me this feels like peak-Silicon Valley, the moment they've completely lost touch with reality.

People may also not be confused if Ben and Jerry's start an ice cream ad with mentions of AGI and the change of human trajectory and Marie Curie, and name it Pre-AGI Rum Raisin, but that doesn't mean the text isn't a beautiful and amusing example of contemporary Silicon Valley self-importance and delusion, and reads like a parody that makes the characters in HBO's Silicon Valley sound grounded and humble. Especially the "pre-AGI" bit, which I'm now stealing and will be using at every opportunity. Maybe it's just me, but I think it is quite hilarious when a company whose actual connection with AGI is that, like many others, they dream about it and wish they could one day invent it, call their work "pre-AGI." Ironic, considering they're writing this pre-apocalypse.


Although, perhaps you would agree that someone saying "my work on Alzheimer's might help your friend" would be behaving in a cruel and unprofessional way unless the treatment was indeed in human trials?


I read the article more as "maybe my work will one day be able to cure Alzheimer's and help people like your friend".

What in the article gave you the impression they had made a large breakthrough or were close to an AGI?


Do you hate all marketing or just around AI in particular? Would you bat an eye at MS investing $1B into a project + ads about say a new GPU architecture promising how "games will never be the same" because the new hardware (or even Cloud Integration) lets developers efficiently try and satisfy the rendering equation with ray tracing?

FWIW I thought you were clear, but there are only so many middlebrow dismissals one can make towards AI or AGI efforts and I think I've seen them all plus the low-value threads they generate. (I've made some too, and suspect we might get brain emulations before AGI, but I try to avoid the impulse and in any case it doesn't stop me from hoping (and minor contributing) for the research on the article's load-bearing word "beneficial" to precede any realistic efforts of building the actual thing. At least the OpenAI guys aren't entirely ignorant of the importance of the "beneficial" problem.)


I don't hate this copy at all; I absolutely love it! I think it is a beautiful specimen of the early-21st c. Silicon Valley ethos, and it made me laugh. Pre-AGI is my new meme, and that means something coming from a pre-Nobel Prize laureate.

What I'm interested in is how many dismissals of AI (most of which end up justified) the field can take before considering toning down the prose a bit, especially considering that the dismissals are a result of setting unrealistic expectations in the first place.


> the topic has been subject to overoptimism for decades now,

But so has every other big idea that went on to become reality, like planes (da Vinci was drawing designs for planes over 400 years before the first working ones).

> no one knows whether what they're doing is even on the right path towards AI

This is completely wrong. That would be like saying "no one knows if working on a wing is on the right path to flight".

Look at the way deep learning works. Look at the way the brain works. They share immense similarities. Some people say "neural nets" aren't like the brain, but that's not true--they are just trying to not over-exaggerate the differences which laymen commonly do. They are very similar.


> But so has every other big idea that went on to become reality, like planes (da Vinci was drawing designs for planes over 400 years before the first working ones).

And so has every other big idea that didn't become reality, and that was the majority. Again, I have no problem with AI research whatsoever, but the prose was still eyebrow-raising considering the actual state of affairs.

> They are very similar.

They are not. The main role of NNs is learning, which they still do pretty much entirely with gradient descent via backpropagation (+ heuristics). The brain does not learn with backpropagation.
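
For readers who haven't seen it spelled out, a minimal sketch of the gradient-descent update deep nets rely on, shown on a single linear neuron so the "backpropagated" gradient is just one application of the chain rule; the toy data and learning rate are made up:

    # One linear neuron y_hat = w * x + b, trained by gradient descent on
    # squared error. Backprop in a deep net computes these same gradients,
    # just propagated layer by layer.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # toy (x, y) pairs, y = 2x
    w, b, lr = 0.0, 0.0, 0.05

    for epoch in range(200):
        for x, y in data:
            err = (w * x + b) - y     # dLoss/dy_hat for loss 0.5 * err^2
            w -= lr * err * x         # chain rule: dLoss/dw = err * x
            b -= lr * err             # dLoss/db = err
    print(w, b)                       # w approaches 2, b approaches 0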


> And so has every other big idea that didn't become reality, and that was the majority.

This is a really good point. Would be a fascinating read if someone were to collect all those examples and explore that a bit.


The most famous is the Philosopher's Stone: a substance that can convert base metals into gold.

But in itself, that was not the point. It would also transform the owner or user -- it was a hermetic symbol, a mechanical means to "pierce the veil" and to see the deep mystical and magical truths, the Real Reality. It was immanent, a thing in the world, that enabled the transcendent, to go beyond, above, outside of the world. Its discovery would have been the single most important moment in the history of the world, the moment in which humans had a reliable road to divinity.

Hmm. Sounds familiar, doesn't it?

But out of alchemy came modern chemistry, and also some parts of the scientific method. After all, as some smart people worked out, you could systematically try all the permutations of materials that your reading had suggested as possibilities. That meant measuring, weighing, mixing properly, keeping detailed notes. Fundamental lab work is the unglamorous slab of concrete beneath the shining houses of the physical sciences. There were waves of hysteria and hype, but after each, something useful would be left behind, minus the sheen of unlimited dreams.

Hmm. Sounds familiar, doesn't it?

These days it is possible for a device to transmute base metals into gold. But the operators have not, so far as I can deduce, ascended to any higher planes of existence. They have eschewed the ethereal and remained reliably corporeal.


> Hmm. Sounds familiar, doesn't it?

I'm not sure the reference you are making. Red pill/blue pill? I wasn't aware of the symbolism in the Philosopher's Stone.

> But the operators have not, so far as I can deduce, ascended to any higher planes of existence.

I guess I'm unfamiliar with this non-literal aspect to the philosopher's stone.

I'm missing the allusions you are making. To ascend to higher planes of existence, no need for AGI, some acid will do.


Do you have a reference on the brain not learning with backpropagation? I'd like to learn more.


https://arxiv.org/abs/1502.04156

> Towards Biologically Plausible Deep Learning

> Neuroscientists have long criticised deep learning algorithms as incompatible with current knowledge of neurobiology. We explore more biologically plausible versions of deep representation learning, focusing here mostly on unsupervised learning but developing a learning mechanism that could account for supervised, unsupervised and reinforcement learning. The starting point is that the basic learning rule believed to govern synaptic weight updates (Spike-Timing-Dependent Plasticity) arises out of a simple update rule that makes a lot of sense from a machine learning point of view and can be interpreted as gradient descent on some objective function so long as the neuronal dynamics push firing rates towards better values of the objective function (be it supervised, unsupervised, or reward-driven). The second main idea is that this corresponds to a form of the variational EM algorithm, i.e., with approximate rather than exact posteriors, implemented by neural dynamics. Another contribution of this paper is that the gradients required for updating the hidden states in the above variational interpretation can be estimated using an approximation that only requires propagating activations forward and backward, with pairs of layers learning to form a denoising auto-encoder. Finally, we extend the theory about the probabilistic interpretation of auto-encoders to justify improved sampling schemes based on the generative interpretation of denoising auto-encoders, and we validate all these ideas on generative learning tasks.
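
For contrast with backpropagation, here is a minimal sketch of the pairwise STDP rule mentioned above, in its standard textbook form (the constants are illustrative, and this is not the paper's specific formulation): the weight change depends only on the relative timing of one presynaptic and one postsynaptic spike, with no backwards-propagated error signal.

    import math

    def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Textbook pairwise STDP: a purely local, timing-based weight update."""
        dt = t_post - t_pre                      # milliseconds
        if dt > 0:    # pre fires before post: strengthen (potentiation)
            return a_plus * math.exp(-dt / tau)
        if dt < 0:    # post fires before pre: weaken (depression)
            return -a_minus * math.exp(dt / tau)
        return 0.0

    print(stdp_delta_w(t_pre=10.0, t_post=15.0))   # small positive change
    print(stdp_delta_w(t_pre=15.0, t_post=10.0))   # small negative change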


There are plenty of people who worked on AI (as graduate students, as ML researchers at hot startups for self-driving cars, as pure researchers or supporting engineers at Google or Facebook, and many other places), and then left because once they saw how limited the research was they lost hope it was going to happen before they died.

Also while I fully understand and appreciate the necessity of OpenAI abandoning the 'open' part, it says a lot about who is going to benefit from this technology when you have investors who want to make money. It's just ironically poetic in this instance.


I honestly don't see any possibility of truly "wide-spread benefits for humanity" if AGI is achieved anytime soon. The current state of humanity, when it comes to how we treat each other and anything akin to species-level awareness and collaboration, is only barely better in recent history than the dark ages. If a group of people gets access to an AGI, I think it will very quickly result in a somewhat wider group than that having their lives prolonged, no disease, no lack of resources and practically infinite wealth, and everyone else eventually being either gotten rid of, or allowed to live in far-away slums and left to die off.


> I think the parent has made their position pretty clear through their employment choices,

Their employer has just received a 1 billion dollar cash investment to futz around with computers. I don't think the employment choice is some sort of personal sacrifice here.

The level of skin here is a cushy guaranteed job for years to come until the next AI-winter hits, likely set into motion by such very claims of "AGI" being near or feasible.

"AGI" is good marketing for getting money to research real AI, ostensibly on the path to "AGI", but if one drives it too far, one might up retarding the whole field as the hype winds down (again).


That's true, but at the same time I'm not trying to be in AI because it's such a specialist role, which may or may not be a fad, and which the employment market could overfill with new grads holding a new degree in "ML", bricking salaries. But I think the internet will be here for a while.


Agreed. By the way, does anyone want to try my new VR tech? It will change everything forever!


>OpenAI is not actually building AGI. Maybe it hopes that the things it is working on could be the path to an eventual AGI. OpenAI knows this, as does Microsoft.

Yes they are? They are making the breakthroughs/incremental advances that are required to get there, and building components along the way. It would be like you saying, well, Henry Ford isn't building a vehicle, he's just building a wheel, and a tire, and an engine, etc...


> They are making breakthroughs/incremental advances that required to get there

They don't know that. We have no idea what's required to achieve AI. Now, I don't know how long before Ford actually built cars he started saying he was building cars, but if Wikipedia is to be believed, it could not have been more than three or four years. Also, when he started building cars, he pretty much knew what was required to build them. This is not the case for AI.


> We have no idea what's required to achieve AI

Yes, we do. Lots of data, lots of training, better algorithms, more understanding of the brain...At this point we still need 10x+ improvements in a lot of areas, but it's pretty clear what we need to do.

If you can process around 100 petabytes per second (1 Google Index of data per second), you could fully simulate a human being, including their brain. We're still a little bit from that, but it's pretty clear we'll get there (barring usual disclaimers about an asteroid, alien invasion, etc).

Source: I work in medical research, doing deep learning, and do research on programming languages and deep learning for program synthesis.


> Yes, we do. Lots of data, lots of training, better algorithms, more understanding of the brain...At this point we still need 10x+ improvements in a lot of areas, but it's pretty clear what we need to do.

This is absurd. How much data? How much training? What kind of training? How much better do the algorithms need to be? How do you define better? Also we literally don't even know how our brains work, so we don't know how "actual" intelligence works, but you're saying we have a clear road map for simulating it?

Your entire argument distills down to "we just need to do the same things, but better." And even that statement might be wrong! What if standard silicon is fundamentally unsuited for AGI, and we need to overhaul our computing platforms to use more analog electronics like memristors? What if everything we think we know about AI algorithms ends up being a dead end and we've already reached the asymptote?

I'm not saying AI research is bad. I'm saying it is absolutely unknown by ANYONE what it will take to achieve AI. That's why it's pure research instead of engineering.


> Yes, we do. Lots of data, lots of training, better algorithms, more understanding of the brain..

So to build AI all that remains is to understand how it could work.

> but it's pretty clear what we need to do

It isn't (unless by "clear" you mean as clear as in your statement above). I've been following some of the more theoretical papers in the field, and we're barely even at the theory forming stage.

> but it's pretty clear we'll get there.

First of all, I don't doubt we'll get there eventually. Second, I'm not sure simulating a human entirely falls under the category of "artificial". After all, to be useful such a mechanism would need to outperform humans in some way, and we don't even know whether that's possible even in principle using the same mechanism as the brain's.


> I've been following some of the more theoretical papers in the field, and we're barely even at the theory forming stage.

I read those papers too. And I write code and train models day in and day out. I could get very specific on what needs to be done, but that's what we do at our job. If you're curious, I'd say join the field.

I agree with you in that I don't think for a second anyone can make an accurate prediction of when we will reach AGI, but I have no doubt that it will be relatively soon, and that OpenAI will likely be one of the leaders, if not the leader, in creating it.


I’ve been doing research in the DL field for the last 6 years (just presented my last paper at IJCNN last week), and I can say with confidence we have no clue how to get to AGI. We don’t even know how DL works on a fundamental level. More importantly, we don’t know how the brain works. So I agree with pron that your “relatively soon” is just as likely to be 10 as 100 years from now.


I could explain it to you in an afternoon. But I’m not going to do it online, because then you have a thousand people call you “delusional”, because you simply are stating that exponential processes are going to continue. For some reason, many people who think themselves rational and scientific believe that things that have been going exponentially are suddenly going to go linear. To me, that is delusional.


> because you simply are stating that exponential processes are going to continue.

Exponential process continuing doesn't imply "we're going to get there soon" in any way, shape or form. The desired goal can still be arbitrarily far.


Explain what?


How to get to AGI.


If you know how to get there why don’t you build it?


1) Indeed we are doing a few of the things on the checklist to build AGI.

2) Our focus is on helping improve medical and clinical science and cancer tooling first.

3) If we needed AGI to cure cancer, perhaps we'd be working directly on AGI. If anyone thinks this is the case, please let me know, as at the moment I don't think it is.


You don’t think AGI would dramatically speed up cancer research (or any other research)?


Of course I do, but my back of the envelope guess is there's a 30% shot we can cure cancer in 15 years without AGI, and a 1% shot we can reach AGI in 15 years. I think AGI is cool but I'm much more concerned about helping people with cancer.


These are all assumptions, and there is a lot of disagreement in the academic community around it.

Humans don't seem to need anywhere near the same level of data or training that our current models need. That alone is a sign that deep learning may not be enough. The focus on deep learning research has a lot of useful benefits, so I'm not discounting that, but there are a decent amount of smart people who don't believe it's going to lead us to AGI.

Source: I also work in medical research, and am doing deep learning- and I've worked for a company that's focused on AGI, and I've worked with several of the OpenAI researchers.


> Humans don't seem to need anywhere near the same level of data or training that our current models need.

I find this to be a common misunderstanding. If I show you one Stirch Wrench, and you've never seen one before, you learn instantly, and perhaps for the rest of your life you'll know what a Stirch Wrench is. The problem is I didn't show you 1 example. You saw perhaps millions of examples (your conscious process filters those out, but in reality think of the slight shaking of your head, the constant pulsing of the light sources around you, etc., as augmenting that 1 image with many examples). I think humans are indeed training on millions of examples; it's just that we are not noticing it.
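
The machine-learning analogue of that head-shaking and light-flicker is data augmentation; a minimal sketch, with the perturbation model and all parameters made up:

    import random

    def augment(pixels, n_copies=1000, noise=0.02):
        """Turn one example into many by adding small random perturbations,
        loosely analogous to head movement and changing light."""
        return [[p + random.gauss(0, noise) for p in pixels]
                for _ in range(n_copies)]

    one_example = [0.1, 0.5, 0.9, 0.3]    # stand-in for a real image
    training_set = augment(one_example)
    print(len(training_set))               # 1000 variants from a single example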

> That alone is a sign that deep learning may not be enough.

I 100% agree with that. It's going to take improvements in lots of areas, many unexpected, but I think the deep learning approach is the "wings" that will be near the core.


I think what you're terming a misunderstanding is actually fairly well known, but it doesn't account for the magnitude of the situation.

Here's a great article about a paper showing that humans' prior knowledge does help with learning new tasks- https://www.technologyreview.com/s/610434/why-humans-learn-f...

However, that doesn't account for how quickly toddlers learn a variety of things with a small amount of information. Even more important, you can also just look at things like AlphaGo- they train on more examples than could be accumulated in a hundred human lifetimes.

For these reasons I don't believe "more data" and "more training" is the answer. We're going to need to do a lot more work figuring out how humans manage recall, how we link together all the data, and I would be surprised if this didn't involve finding out that our brain processes things in ways that are far different than our current deep neural nets. I don't believe incrementalism is going to get us to AGI.


I’m always puzzled at this idea that humans, at whatever age, are learning things with a small amount of information. The full sensory bandwidth of a baby from pregnancy to toddlerhood seems huge to me. I suspect that helps, as does the millions of years it took to create the hardware it all runs on.


I don’t believe incrementalism will get us there either. We need many more 10x+ advances. But I think it’s relatively clear where those advances need to be. I think simply by making 10x advances in maybe 100 or 1k domains we’ll get there. Neuralink, for example, just announced many 10x+ advances, such as in the number of electrodes you can put in the brain. Our lab is working on a number of things that will also be 10x advances in various sub-domains.

Lots of advances in many fields will lead to something greater than the sum of their parts.

Edit: p.s. I like your comment about toddlers. As a first-time father of a 6-month-old, it’s been very intellectually interesting watching her learn, in addition to her just being the greatest bundle of joy ever :)


I think that the lack of a hundred or a thousand 10x advances (you may be more pessimistic than me) does not merit calling your work pre-AGI.


> doing deep learning, and do research on programming languages and deep learning for program synthesis.

That sounds fascinating. Could you link to some relevant stuff about languages and deep learning for program synthesis? I'd love to read more about this.


Sure! Shoot me an email to remind me


To my knowledge Henry Ford didn’t start off selling wagon wheels.

Also when he started there were working automobiles already.

The fact that no one knows how to make an AGI, doesn’t make it a bad goal. But OP is right, if you think you know the timeframe it will arrive in, you have no idea what kind of problem you’re dealing with.


Allow me to repost Altman's wager:

- If OpenAI does not achieve AGI, and you invested in it, you lose some finite money (or not, depending on the value of their other R&D)

- If OpenAI does not achieve AGI, and you did not invest in it, you saved some finite money, which you could invest elsewhere for finite returns

- If OpenAI achieves AGI and you invested in it, you get infinite returns, because AGI will capture all economic value

- If OpenAI achieves AGI and you did not invest in it, you get negative infinite returns, because all other economic value is obliterated by AGI

Therefore, one must invest (or in this case, "work on the most important problem of our time").

(And yes, this is tongue-in-cheek.)


This does not presuppose any kind of precise definition of infinity.


I think infinity in the gp comment could well be defined as, "the new AGI regime will or won't obliterate me." The gp comment is just Pascal's Wager, with AGI taking the part of God, and "infinite returns" taking the part of an eternity in Heaven or Hell.


That was seven decades ago, and we are not even close to achieving insect-level intelligence.

[citation needed]

I guess this depends on what "close" is. For something as blue sky as AGI, let me propose the following definition of "close:" X is "close" if there's over a 50% chance of it being achievable in the next 10 years if someone gave $10 billion 2019 US dollars to do it.

I think this is a fair metric for "close" for a blue-sky goal which has the potential to completely change human history and society. It's comparable to landing someone on the moon, for instance. Now, let's pick the insect with the simplest behavior. Fleas and ticks are pretty stupid, as far as insects go. I think we're "close" to simulating that level of behavior. Of course, that's straw-manning, not steel-manning. If we pick the smartest insects, like jumping spiders and Tarantula Hawks, we're arguably not "close" by the above metric. Simulating a more capable insect brain of a million neurons is not an insignificant cost, and training one through simulation would multiply the computing requirements many times that. However, there are evidently systems which are capable of simulating 100 times that number of neurons:

https://www.scientificamerican.com/article/a-new-supercomput...

So I would say, we're arguably not "close." However, we're not that far off from "close."
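
For a rough sense of the raw compute behind "not an insignificant cost" (every figure below is an assumed order of magnitude, not a measurement): a million-neuron brain with roughly a thousand synapses per neuron, updated at roughly 100 Hz, already implies on the order of 10^11 synaptic events per second, before any training loop multiplies that further.

    # Orders of magnitude only; all three inputs are assumptions.
    neurons = 1_000_000          # "a million neurons" from above
    synapses_per_neuron = 1_000
    update_rate_hz = 100
    events_per_second = neurons * synapses_per_neuron * update_rate_hz
    print(f"{events_per_second:.0e}")   # about 1e+11 synaptic events per second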


For a comment this precise I'm surprised you've mistaken spiders for insects :) Anyway, I think that "if you gave us $10B then in ten years we have even odds of producing something as smart as a jumping spider" does make for less inspirational copy than "[we're] building artificial general intelligence with widely distributed economic benefits."


For a comment this precise I'm surprised you've mistaken spiders for insects :)

True. They're fellow arthropods, and have similar levels of nervous complexity. (BTW, are you by any chance confusing Tarantula Hawks for spiders?)

does make for less inspirational copy

The levels of inspiration in the copy and generalizing across the phylum Arthropoda aside, are you effectively conceding that we're close to AGI at insect levels?


> are you effectively conceding that we're close to AGI at insect levels?

By "we are not even close to achieving insect-level intelligence" I think I meant that what we have now is not close in intelligence (whatever that means) to insects. I don't know if we have a 50% of getting there in a decade, but I certainly wouldn't conclusively say that "we are not even close" to that. I mostly regret having chosen bikes rather than electric scooters for my original comment. I think that sounds funnier.


By "we are not even close to achieving insect-level intelligence" I think I meant that what we have now is not close in intelligence (whatever that means) to insects.

Some insects are pretty stupid! Fleas and ticks have a good and highly adapted repertoire of behaviors, but for the most part, as far as we know, most individual behaviors are fairly simple.

I mostly regret having chosen bikes rather than electric scooters for my original comment.

Here's where your analogy falls down. We don't even have working examples of a complete warp drive, or anything like it. On the other hand, we don't have any commercial airliner sized beamed-power electric jets, but we have smaller conceptual models of the involved devices which demonstrate the principles. This is why I'd say we're "close to close" to insect level intelligence. 10 years and $10B would get us to the flea level. I think that's "close" like airliner sized beamed-power electric jets is close.


I think my point was lost because it's my pre-Primetime Emmy material.


I think your point was lost because there's some scaling problems in the mental models used to formulate it.


> AI (or AGI, as you call it)

AI and AGI may have meant the same thing a long time ago, but the term "AI" has been used almost ubiquitously to represent things that are not AGI for so long now, that I don't think the terms are interchangeable any longer.


>we are not even close to achieving insect-level intelligence.

Is this true? Is there an insect turing test?


Compare the most advanced self driving car to the simplest insect and you should immediately realize how far we are from insect level AI.


I don't see how it could be. What can insect brains do that we couldn't get AI to do?


Pretty much everything that insects do is beyond our current AI and engineering tech. Ignoring the "engineering" feats that biological beings perform such as replication, respiration and turning other plants and creatures into their own energy source, their behaviour is very sophisticated. Imagine programming a drone to perform the work of a foraging Bee, using a computer that fits into a brain the size of a grain of rice. It can manage advanced flight manoeuvres, navigation to and from the hive, finding pollen, harvesting it, dodging predators and no doubt a dozen skills I can't even imagine.


Aside from the miniaturization, I'd be surprised if we couldn't make an exact simulacrum of a honey bee in software today, to the limits of our understanding of honey bees.

As with AI... a system can be simulated to a given level of fidelity without necessarily simulating the entire original underlying system.


This doesn't necessarily say much about the state of our AI expertise, but our understanding of honey bees is an insufficient basis for the construction of anything that would survive or be an effective member of a hive. Just a week or two ago on HN there was an article about how scientists have only just now acquired a reasonably complete understanding of the waggle language that they use to communicate with one another. (https://www.youtube.com/watch?v=-7ijI-g4jHg)

Perhaps more relevantly, an automaton that could observe such a waggle dance using computer vision and then navigate to the food source described by the waggle seems to me to strain the bounds of our current capabilities, or maybe even to surpass them by an order of magnitude.


Bees also have sophisticated communication skills to tell other bees where to find food.


In terms of intelligence, there isn’t. What prevents us from actually building a uber-insect is miniaturization, self sustaining energy production of some kind and reproduction in an artificially built system. I guess it would be possible to demonstrate insect level intelligence by actually replacing an insect brain with an artificial one.


Your guess would be wrong. Our actual level of AGI development is maybe more on the level of a flatworm. Complex, social insects like bees are still far beyond our ability to simulate.


Controllably fly in strong wind using very primitive sensors.


What can a modern F1 tire do that we couldn't do with a 500BC wooden wheel ?


How do you know we are not closer to AGI? Because we don't know how to create AGI, we cannot know whether we are closer to it or not. We can say artificial neural networks are not the way to go because they are not like real neurons, but we can say very little about neurons, so it could be possible that artificial neural networks are actually the way to achieve intelligence. The topic is so complex, and we know so little, that any strong claim is very likely to be wrong.


I believe your opinion aligns pretty well with that of a growing number of researchers who see in investments like this exactly the warp-drive scenario you described.

In your opinion what should computer scientists be focusing on in order to achieve more advanced AI systems? I'm thinking things such as reasoning, causality, embodied cognition, goal creation, etc.

And this is without even delving into the ethics aspects of (some instances of) AI research.


I can't find any results for either Norbert Wiener or Alan Turing saying those things - do you have a source?


That's a good question. About a year and a half ago I compiled a large anthology of the history of logic and computation, and I remembered coming across that during my research. What I've been able to find now is the following section from Hodges's Turing biography (p 507-8 in the edition I have):

> Wiener regarded Alan as a cybernetician, and indeed ‘cybernetics’ came close to giving a name to the range of concerns that had long gripped him, which the war had given him an opportunity to develop, and which did not fit into any existing academic category. In spring 1947... Wiener had been able to ‘talk over the fundamental ideas of cybernetics with Mr Turing,’ as he explained in the introduction to his book... Wiener had an empire-building tendency which rendered almost every department of human endeavour into a branch of cybernetics... Wiener delivered with awesome solemnity some pretty transient suggestions, to the general effect that solutions to fundamental problems in psychology lay just around the corner, rather than putting them at least fifty years in the future. Thus in Cybernetics it was seriously suggested that McCulloch and Pitts had solved the problem of how the brain performed visual pattern recognition. The cybernetic movement was rather liable to such over-optimistic stabs in the dark.

So if this passage is indeed the source of my recollection, while very poor and perhaps exaggerated, I think it's pretty true to the spirit...


Insects are super predictable. They almost always act identically in response to the same stimuli, which is why cockroaches will always eat poisoned bait if it's within a foot of them, no matter the circumstance, while rats are wily.


guardrails in terms of policy, if not technical details, are still valuable.

the thing is, there are actually lots of reasons to think AGI cannot be constrained in this way. open AI researchers know this.

so that means, the promise and the charter are irrelevant. open ai will never release a general AI.

but in the meantime, deep learning is still reaping rewards. every day it's being applied to something new and solving real, tangible problems. there's money to be made here, and that is what open AI seems to really be doing. being philosophical and "on top" of the futuristic moral dilemmas, whatever, is just marketing? and in the unlikely event that an AGI is created that can be tamed, great for open ai! if an AGI is created that cannot be tamed, what then? if it's really worth a trillion dollars, is it really just buried, or will the charter simply be rewritten?

you know, this reminds me a lot of all the great physicists working on the atom bomb, thinking it was never going to be used.


> and we are not even close to achieving insect-level intelligence.

Aren't we close to this? Most insects only have a few million neurons in their central nervous system, so we should at least be able to simulate a network of that size in real time. Maybe we still lack the tools for training such networks into useful configurations?
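
A rough back-of-the-envelope for the compute side (my own assumed, order-of-magnitude numbers, nothing measured):

    # Back-of-the-envelope with assumed numbers: ~1e6 neurons,
    # ~1e3 synapses per neuron, state updated at ~100 Hz.
    neurons = 1e6
    synapses_per_neuron = 1e3
    update_hz = 100
    ops_per_second = neurons * synapses_per_neuron * update_hz  # ~1e11 multiply-adds/s
    print(f"{ops_per_second:.0e} synaptic ops/s")  # comfortably within one modern GPU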


Yes, if you assume a technical model in which each neuron emits only a single scalar output (say a bfloat16), then we could simulate insect brains right now. But that technical neuron model, a weighted sum of inputs followed by a sigmoid activation function, is only an approximation (a minimal sketch of it is at the end of this comment).

Neurons communicate with each other with a multitude of neurotransmitters and receptors [1]. As a cell, each neuron is a complex organism of its own that undergoes transcriptomic and metabolic changes. We aren't even close to simulating all protein interactions in a single cell yet, let alone in millions of them.

Of course you could say that a full protein simulation of an entire brain is not necessary if we can build an accurate enough technical model of a single neuron. In fact, we already have to apply a model of how we believe proteins behave, since "properly" simulating the interaction of two proteins (or of one with itself) with lattice QCD approaches is beyond our computational capabilities. For protein interaction we have pretty good models already. But finding a model of all types of neurons in insect brains is, right now, an open, unsolved challenge.

[1] https://en.wikipedia.org/wiki/Neurotransmitter#List_of_neuro...
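
For concreteness, here is a minimal sketch of that "one scalar output, sum of inputs plus sigmoid" model (my own illustration with random placeholder weights, not anything trained); everything described above about real neurons (neurotransmitters, receptors, metabolic state) is simply absent from it:

    import numpy as np

    # Minimal sketch of the standard artificial-neuron approximation:
    # a weighted sum of inputs squashed by a sigmoid, one scalar output.
    # Weights here are random placeholders, not a trained model.

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def neuron(inputs, weights, bias):
        return sigmoid(np.dot(inputs, weights) + bias)

    rng = np.random.default_rng(0)
    n_inputs = 8
    x = rng.normal(size=n_inputs)   # stand-in for upstream activations
    w = rng.normal(size=n_inputs)   # stand-in for synaptic weights
    print(neuron(x, w, bias=0.1))   # a single scalar in (0, 1)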


> we could simulate insect brains right now

AFAICT this suggests that we have the computational power but wouldn't it also be a significant challenge to create an accurate model for the brain simulation?


Lattice QCD is used for sub-nuclear simulations, proteins are studied with much more tractable methods based on regular quantum mechanics.


Yes, that's my point: you don't need to simulate a protein with that tool because we have good enough models of higher-level structures like atoms. And similarly we might find models for neurons that allow us to avoid fully emulating all protein interactions. We figured out how atoms work before we figured out how nuclei work, but with neurons it's the opposite: we know (or can figure out) how the parts of the machine (the proteins) work, but not how the entire machine works.


Once we know how a neuron works, ask again. I am not sure how this detail keeps getting glossed over.


You don't need planes to flap the wings in order to fly.


But you do need to understand that they generate lift, and be able to mathematically describe something that generates lift. The Wright brothers wrote to the Smithsonian in 1899 and got back, among other things, workable equations for lift and drag.

I think people think back propagation is the metaphorical lift equation here and we just need a "manufacturing" advancement (i.e., more compute and techniques for using it). We're close to that (I personally feel, with poor evidence) but definitely not there yet (as evidenced by nobody having published it). We cannot describe what is happening with modern architectures as fully as a lift equation predicts fixed-wing flight, and so it is largely intuition plus trial and error, which is a slow, unreliable way to make progress.


My point is that while the brain and neurons are very complex and inherently confusing, there are billions of lifeforms that operate on this architecture and do not display sentience or intelligence.

Secondarily, just because neurons are complex on technical level, it does not mean that they should be complex on logical level.

For example, in computers if you would look at the CPU structure, on a low level you have quantum effects and tunneling and very insane stuff but on a logical level you are dealing with very trivial boolean logic concepts.

I would not be surprised in the slightest if copying and reverse-engineering neurons per se turned out not to be a necessary or defining aspect of anything related to AGI.


Yeah but we didn't need to fully understand how animal wings actually work, we just needed to understand what they do (generate lift). Similarly I don't understand the focus in this conversation on fully understanding the protein interactions that make neurons work. We just need to understand what neurons do. And I thought what they do is actually pretty simple due to the "all or nothing" principle. https://en.wikipedia.org/wiki/All-or-none_law


That's pretty far from "when you do this, you get the generalizable thought required for AGI". The lift equation said "this equation shows that when you do this, this object moves upward against the air", which was the goal of flight. For AGI we have "when you do this, the loss goes down for this task"; we are missing so many pieces between that and the concept of AGI.

People think maybe the missing pieces might be in the other things we don’t understand about the brain. It makes sense- it does what we want, so the answer must be in there somehow. I agree we don’t need to perfectly understand it, it just seems like a good place to keep looking for those missing pieces.


We are and in a sense we know how they work. It's called swarm intelligence which does not even require neural nets to begin with.

OP probably just wanted to downplay the current state of AI.


We still cannot convincingly model the behaviour of even the simplest individual organisms whose neural circuitry we know in minute detail.


What do you mean by "model behavior"? We have AI systems that can learn walking, running and other behavior with just trial and error, I would call that simple behavior.

Now here's a more advanced example, teaching a virtual character how to flex in the gym: https://www.youtube.com/watch?v=kie4wjB1MCw

That's a bit more advanced than simple walking.

Here's a deployed AI to a real robot "crab":

https://www.youtube.com/watch?v=UMSNBLAfC7o

How about virtual characters learning to cooperate?

https://www.youtube.com/watch?v=LmYKfU5O_NA


"We have AI systems that can learn walking, running and other [...]"

In one of your examples, which are all of narrow AI, we see a mechanical crab powered by ML that has become specialized in walking with a broken limb, which is not even close to what we need if we aim for AGI. For AGI we don't need agents that mimic simple behavior. In my opinion, _mimicking_ behavior will not lead to AGI.

What _will_ lead to AGI? No one knows.


macleginn's complaint was that we haven't even modelled simple behavior and I brought these narrow AI examples as a counter argument since they demonstrate that we can, even complex ones. Domain specific? Yeah, bummer.

Nowhere have I stated this is the clear path to AGI, and you are right, we are missing key building blocks. But I feel like there's too much skepticism against this field while the advancements are not appreciated enough.

I don't know either what will lead there, but I see more and more examples of different networks being combined to achieve more than they are capable of individually.


> macleginn's complaint was that we haven't even modelled simple behavior

No, the complaint was about modelling the behavior of simple organisms.

Certainly we can model some of their behaviors, many of which are highly stereotyped. But the real fly (say) doesn't only walk/fly/scratch/etc, it also decides when to do all of these things. It has ways to decide what search pattern to fly given confusing scents of food nearby. It has ways to judge the fitness of a potential mate, and ways to try to fool potential mates. Our simulations of these things are, I think, really terrible.


I linked to modeled organisms. I always feel the HN crowd expects an academic level of precision in discussions, which kills the kind of regular discussion I would have at dinner tables with friends; I wish it were a more casual place. Yes, I meant "behavior of simple organisms" :)

Since everything here is loosely defined I feel it's totally pointless to discuss AI, but it's still an intriguing topic. If you look at those insects, they tend to follow Brownian motion in 3D, get food and get confused by light; we can get an accurate model of them and more [0].

The key word here is model, not replicate. Simulations are just that, simulations. Given current examples of what's already possible, someone who wanted to could model a detailed 3D environment with physics, scents and food for our little AI fly.

[0] https://www.techradar.com/news/ai-fly-by-artificial-intellig...

Is that a terrible attempt?


We can model some aspects of insect behavior. The simulation even looks convincing at first glance (just as simple "AI" looks convincing with a superficial examination of a conversation or text-generation). But we have not been able to fully model the behavior of, say, a bee (which may be enough to solve self-driving cars and then some).


Exactly.

> they tend to follow brownian motion in 3D

Well, their entire neural system exists to make deviations from Brownian motion. That's the whole point of being an animal not a plant. And doing it well is very very subtle.

First steps towards modelling such behavior can be super-interesting science, not a terrible use of time at all. They can capture a lot of truth about how it works. But like self-driving cars, the thing that kills you is usually a weird edge case, not the basic thing.


Terrible? Not at all.

I'm sorry and apologize if you feel I was one to kill the discussion you wanted to have around AI.

I'm one of those dreamers who think AGI is, or at least should be, possible, soon, through means we have not yet discovered but will, soon. I base that on absolutely nothing, I suppose, other than the fact we have lots of "bright/smart/crazy" devs working on it. It's my own personal "believie", as Louis CK would say about things we believe in but cannot or care not to prove.

Just like you, I'm looking at organisms much simpler than us as a way forward. Many specialized neural networks do not add up to AGI, is what I think. Is it the organic, human neuron we should model? I don't necessarily think so. Also, robotics + ML is a dead end to me. An amoeba that can evolve into something more complex is perhaps what we should model.


>I can't tell if you're serious, but assuming you are,

I assume he is, given he is Greg Brockman the CTO and a co-founder. I know Sam Altman is similarly optimistic, having told me on multiple occasions something along the lines of 'I can't focus on anything else right now' which in context I very much took as 'this presently consumes my waking thoughts and I only have time for it'.

This sort of drive is great, but I don't think it necessarily makes it true. Mr. Altman is financially independent, he needn't worry about things like rent or putting food on his table and I imagine Mr. Brockman is also independently wealthy (or at least has several years of a cushion if his OpenAI salary were to suddenly dry up), perhaps not as much though, given his previous position at Stripe.

These two, and perhaps other members of the team, can be overly optimistic about their passion. Both of them have this view, and they both co-founded OpenAI. This optimism and enthusiasm, along with interesting project successes so far, certainly gives them steam and attention. But how many aspiring athletes think they're going to get drafted for tens of millions of dollars when in reality they might be lucky to get scouted by a college, or lucky to get drafted to a European or Asian league and not necessarily a major league US team? How many musicians think they'll get into Juilliard and go on to some top-tier symphony/orchestra, or will be the next Country/Rock/Rap/Pop star that takes the world by storm, only to end up playing music with their friends at some dive bar a few times a year despite their enthusiasm and skill?

I think a major problem OpenAI has, which I've expressed to Altman, is that they suffer what Silicon Valley in general does. They are myopic, their ranks are composed of people that are 100% behind AI/AGI, they dream about AGI, they want to create AGI, they absolutely think we will have AGI, they want AGI with every fiber of their being. They're high in the sky with apple pie about AGI.

But who's going "hey wait a minute guys" and climbing up a ladder to grab them by the cuff of their pants to pull them back down to the floor and tie a tether to their leg? As far as I know, no one under their employ.

I think OpenAI needs to bring in some outsiders, have a team internally that serves as a sanity check, and probably a board member as well. I think it is very dangerous to only have people working on your project who are overly optimistic. It reminds me somewhat of the movie Sling Blade: a lawnmower is brought in for repair and the folks can't figure out what's wrong with it, so they present it to Billy Bob Thornton's character, who has some sort of mental deficit. He looks at it briefly and states "It ain't got no gas". He has a different perspective of the world, he sees things differently, and this allows him to see something that the others overlooked. While gobs of gobbledygook code and maths are a far different thing than a lawnmower without fuel, I still think there is a danger in having one of the greatest STEM projects mankind has ever attempted staffed only by a bunch of coders, in a field that is effectively new, who largely have the same training and the same life experiences.

Here's a portion of what I said to Mr. Altman back in May of this year and I think it applies more than ever, that isn't exactly related to this comment chain but maybe posting it here will get it seen by more people at OpenAI:

---

You are aware, you guys are in a bubble there. People in the Bay Area are at least peripherally aware of what Artificial Intelligence is presently and could be. For the bulk of the country, and the majority of the world, people are largely clueless. If you say 'artificial intelligence', people either have no idea what you are talking about (even people in their 20s and 30s, which was shocking to me) or something like HAL 9000, Skynet, Colossus: The Forbin Project, etc. comes to mind. I think the industry, and OpenAI especially, are missing out on an opportunity to help educate people on what AI can and will be, how AI can be benevolent and even beneficial.

OpenAI is missing out on an opportunity here. While the bulk of resources obviously need to go to actually pursuing research, there is so much you could be doing to educate the masses, to generate an interest in the technology, to get more people passionate about/thinking about machine learning, AI and all of the potential applications.

...possible examples given...

You need to demystify AI Sam, you need to engage people outside of CS/Startup culture, engage people other than academics and venture capitalists.

...more examples given...

---

I will point out that in that same exchange I told him I thought raising the billions OpenAI would need was laughable. Well, I'll take a healthy bite out of my hat: they managed to raise a billion from a single source, bravo.

I had the pleasure of visiting OpenAI towards the end of Spring '18 and certainly from what I saw they are very serious towards their goal and aren't joking about believing 100% that AGI is an obtainable goal within their reach.

It's also worth noting I applied to OpenAI in the past year, post my visit, for their "Research Assistant, Policy" position and that I was somewhat miffed by the form rejection which, from outside of STEM, seems very cold:

>We know that our process is far from perfect, so please take this primarily as a statement that we have limited interview bandwidth, and must make hard choices. We'd welcome another application in no fewer than 12 months - the best way to stand out is to complete a major project or produce an important result in that time

I still haven't a clue as to what major project or important result, that I can achieve in researching policy for Artificial Intelligence given that:

- Artificial intelligence doesn't exist

- No one has created policy for it outside of science fiction

I may have not been the most qualified, which is fine, as I lacked the 4-year degree they had listed as a requirement, but a human being never once talked to me, never once asked me a question, just a web form and a copy-paste email with my first name inserted.

We don't always need someone with a stack of degrees, that is 100% pro-AI, that has programming experience, to help research policy and presumably lay the groundwork for both OpenAI and the industry. I think a team like that should only involve 10-20% individuals that are experienced in the field, I think you need a diverse team, with diverse experience, with diverse backgrounds. If an AGI is developed, it won't just serve the programmers of the world, it won't just have an impact on their life, STEM folks are far outnumbered by those with no STEM backgrounds.

Who is representing the common human in this? Who's going "are you sure this is a good idea" "should we really be training it with that data" "is it really in the best interests of humanity to allow that company/entity to invest or to license this to these types of causes?"

But hey, what do I know?



Exactly, and if they do create an AGI it's probably going to be a lot like its creators, as its original parameters were set by them.

Just look at Amazon's warehouse algorithm:

~Biotic unit achieved goal, raise goal~

~Biotic unit achieved new goal, raise goal~

~Biotic unit achieved new new goal, raise goal~

~Biotic unit failed new new goal, replace biotic unit~

~New biotic unit failed new new goal, replace biotic unit~

~New new biotic unit failed new new goal, replace biotic unit~

With Amazon though, a human can eventually go "wow, we're firing new hires within the first 3 weeks like 97% of the time, and 100% within 6 weeks, erm, let's look at this algorithm".

But if you create an AGI that has the Silicon Valley mindset of "we will do this, because we have to do this" (an exact quote I heard from an individual while in the Bay Area to stop by OpenAI: "We will figure out global warming, because we have to"), then the AGI is probably going to be designed with the 'mindset' that "Failure is not an option, a solution exists, continue until solution is found", which, uh, could be really bad depending on the problem.

Here's a worst case scenario:

"I am now asking the computer how to solve climate change"

~~beep boop beep boop, beep beep, boooooooop~~ the CO2 emissions are coming from these population centers.

~~boop beep beep beep boop boop boop boop beep boop~~ nuclear winter is defined as: a period of abnormal cold and darkness predicted to follow a nuclear war, caused by a layer of smoke and dust in the atmosphere blocking the sun's rays.

~~boop beep beep, boop~~ Project Plowshare and Nuclear Explosions for the National Economy were projects where the two leading human factions attempted to use nuclear weaponry to extinguish fires releasing excessive carbon dioxide as well as for geoengineering projects. Parameters set, nuclear weapons authorized as non-violent tools.

~~beep beep beep boop boop boop, beep boop, beep, boop, beep~~ I now have control of 93% of known nuclear weapons, killing the process of 987 of the most populous cities will result in sufficient reduction for the other biotic species to begin sequestering more carbon than is produced, fires caused by these detonations should be minimal and smaller yield weapons used as airbursts should be capable of extinguishing them before they can spread. Solution ready. Launching program.

Watch officer at NORAD some time later "Shit, who's launching our nuke?!?"

Someone else at NORAD "they're targeting our own major population centers!"

Somewhere in Russia "Our nuclear weapons are targeting our own cities!"

Somewhere in Pakistan "our nuclear weapons are targeting our own cities!"

somewhere...


> calling it "pre-AGI" pretentious to the level of delusion.

I don't think you know what you are talking about. Do you do Deep Learning? If you are not actively engaged in the field, I wouldn't be so quick to dismiss others who are (especially not others who are at the top of the field).

That being said, you brought up some interesting points, even if I think your overall position is wrong--I think OpenAI is definitely going to hit "pre-AGI" if not AGI, and I do this stuff all day long.


I study ML, and I completely agree with the quoted statement. Deep networks have gotten pretty good at recognizing correlations in data. That's not on the same map as AGI. I don't know what "pre-AGI" means exactly, but I would include things like counterfactual reasoning or ability to develop and test models of the world, which are far from our AI capabilities so far. (edit: yes I am including RL, considering the relative performances of model-based vs model-free, I think this is a fair statement. Don't mean to be pessimistic, just realistic and trying to set expectations to avoid more winters.)


To be clear, I don't think Deep Learning = AGI. I think it's just one important piece, but I think we are also making many other rapid advances in relevant areas (neuralink's 10x+ improvement in electrodes, for one).


I was actively engaged in the field in the late-nineties when AI was also five years around the corner. I’ve mostly lost interest since then, and the disappointment that is deep learning has only dulled my enthusiasm further (not that it hasn’t achieved some cool things, but it’s a long way from where we’d thought we’d be by now).


> I was actively engaged in the field in the late-nineties

So was Geoffrey Everest Hinton

> I’ve mostly lost interest since then

but he didn't give up.

If you expect someone to just hand us AGI in a nicely wrapped package with a bow, with all the details neatly described, you are absolutely right, that's really far off!

But for the record there are many people actively grinding it out in the field, day in and day out, who don't give up when things get hard.


The kind of language used in this release has actually hurt AI considerably before, so by pointing out that it's delusional I am not giving up; rather, I am helping save AI from the research winter that OpenAI seems to be working toward. You're welcome, AI!


okay, I'll concede your point that perhaps being bold could be bad publicity for the field. I think that's a reasonable position to take. I don't think it is correct, but I think it's reasonable. Even if it were the precursor to a drop in funding, I don't think the previous "AI Winter" was so long in comparison to the century-long gaps in the advance of other technologies in history (binary was invented hundreds of years before the computer).

I would definitely not call OpenAI delusional. I would say all OpenAI is being here is "honest".

They are simply stating what the math tells them.

"E pur si muove"


> They are simply stating what the math tells them.

Which math?


He's also telling us we're going in the wrong direction, and have been, with our approach to reinforcement learning. He's not convinced that's how the brain works. In fact he's convinced it's not.


> > calling it "pre-AGI" pretentious to the level of delusion.

>I don't think you know what you are talking about. Do you do Deep Learning? If you are not actively engaged in the field, I wouldn't be so quick to dismiss others who are (especially not others who are at the top of the field).

I do (or at least I try; I do get money for my attempts) and I concur with calling it delusion. So does François Chollet. So does Hinton to some degree, and so did the founder of DeepMind, at least in 2017: https://venturebeat.com/2018/12/17/geoffrey-hinton-and-demis...

I want to like OpenAI. I think they did the right thing with GPT-2 and I give them a lot of credit for publishing things. That being said, I remain skeptical about AGI, highly skeptical about AGI being feasible or being the thing to worry about. I always make the argument that research toward controlling an AGI (AGI alignment) is either a techified version of research into the problem of good global governance (in which case it is an interesting problem that desperately needs solving), or it is useless (because no matter how nicely you control the AGI, a non-accountable elite within the current system, the less-than-perfectly aligned government, etc. will strongarm you into giving control to THEM before you come close to deploying it), or it is delusional (because you think you are smart enough to build AGI without these elites finding out AND smart and/or wise enough to do what is best for humanity).


> the less-than-perfectly aligned government etc. will strongarm you into giving control to THEM before you come close to deploying it

and

> because you think you are smart enough to build AGI without these elites finding out AND smart and/or wise enough to do what is best for humanity

Are very good points, and I share those concerns too, and have no good answers. I'm in the pessimist camp when it comes to AGI: I would bet heavily that it's going to happen, but I wouldn't bet a dollar on whether it will end up being good for humanity, as I haven't a clue.


> If you believe AGI might be achievable any time soon, it becomes hard to work on any other problem

Not really. It's a chance at maybe something that could benefit mankind greatly, vs. spending the time and money on something that definitely will help people right now (there are still a LOT of homeless people, for example, who could be helped right now and don't need an AGI that may or may not come to pass to help them).


Good feedback, and not what I intended to say. I updated my post.


By the way, just to be clear, I'm not saying that you shouldn't work on it or that I think the time/money should be spent elsewhere, just that it isn't the be all end all of possible things to work on.


it's odd when you say "hard to work on any other problem" given the mere possibility of agi.

consider that the possibility of annihilation is already a very real and present danger (nuclear weapons) posed by human beings. not to mention the existential nature of what we are doing to the environment.

that's partly why i left machine intelligence research to research improving human intelligence


> consider that the possibility of annhiliation is already a very real and present danger (nuclear weapons)

What concerns me about the hazards of developing technology like AGI is not simply that it could be dangerous (governments already possess dangerous technologies), but that it may have a revolutionary high-proliferation risk. We don't yet know what the practical barriers to or limitations of AGI are; are we working towards creating a pure fusion weapon that you can git-clone?

For all the talk of safety, and of concerns being addressed by mass distribution of the benefits, is this just wishful thinking? Is that just paraphrasing the NRA: the only thing that can stop a bad guy with an AGI is a good guy with an AGI?


>It comes down to whether you believe AGI is achievable

How could it possibly not be achievable? We know for certain general intelligence is physically realizable - we exist.


I'm not advocating for the opposing view (or any view) here, and this is just informational, but since you did specifically ask how...

If you want to talk about philosophy (including philosophy of mind but also more basic stuff like materialism vs. dualism), then it's not for certain.

I'd hazard a guess most engineers are materialists and reductionists, and from that point of view, yes, it seems like a slam dunk that it's possible. But some people believe the mind or consciousness is not a purely physical phenomenon. You can make philosophical arguments both directions, but the point is just that there isn't exactly universal consensus about it.


There's quite a lot of experimental evidence in the "purely physical phenomenon" direction. Things like brain surgery, neural implants and mind altering drugs probably wouldn't work as well if they were trying to interact with your non physical spirit.


I definitely see that. It's pretty compelling. You can't deny that certain drugs, for example, are a lever you can pull that makes consciousness go away (or come back). There seems to be a cause-effect relationship there.

But at the same time, I do not understand how it's possible that the consciousness that I subjectively experience can arise from physical processes. Therefore, I have difficulty completely accepting it. I write software. It processes information. I don't believe that the CPU has this same subjective consciousness experience (not even a little) while it's running my for loops and if statements. Suppose I were a genius and figured out an algorithm so that the CPU can process information in a way equivalent to the human brain. Would it have consciousness then? What changed? Does whether it has consciousness depend which algorithm it's executing? Quicksort no, but brain-emulator algorithm yes? They're both just algorithms, so why should the answer be different?

One explanation I've heard is it could be a matter of scale: simple information processing doesn't create consciousness, but sufficiently complex processing does. I can't say that's not true, but it seems hand-wavy and too convenient. Over here we have something that is neither conscious nor complex, and over there we have both conscious and complex, so we'll just say that complexity is the variable that determines consciousness without any further explanation. I realize at some point science works that way: we observe that when this thing happens, this other thing follows, according to this pattern we can characterize (with equations), and we can't get too deep into the why of it, and it's just opaque, and we describe it and call it a "law". Which is fine, but are we saying that this is a law? I'm not necessarily rejecting this idea, but the main argument in favor of it seems to be that it needs to be this way to make the other stuff work out.

Another possible way to reconcile things is the idea that everything is conscious. It certainly gets you out of the problem of explaining how certain groups of atoms banging around in one pattern (like in your brain) "attract" consciousness but other groups of atoms banging around in other patterns don't. You just say they all do, and you no longer need to explain a difference because there isn't one. Nice and simple, but it has some jarring (to me) implications that things around me are conscious that I normally assume aren't. It also has some questions about how it's organized, like why consciousness seems to be separated into entities.

Anyway, there are also other ways of looking at it. My main point here is that it's certainly something I don't understand well, and possibly it is something that nobody has a truly satisfying answer for.


If you do not understand what you mean yourself by the word "consciousness" then it is futile to ask whether an object has that property.

For example, for the purposes of anesthesia, the goal can be broken down into several sub-components that you need to turn off without ever invoking the concept of consciousness: wakefulness, memory formation, sensory input (pain).

Similarly, consciousness seems to be a grab-bag of fuzzy properties that we ascribe to humans, and then, by being a bit lenient, we also allow a few other species to roughly match (some of) those properties if we squint. And since humans and other, clearly somewhat simpler, species are clearly conscious, we go on and declare it a really difficult thing to understand how ever-simpler things could fail to be conscious. It's just the paradox of the heap.

This doesn't mean consciousness is magical. It's just a very poorly defined and overloaded concept almost bordering on useless in the general case. It may feel like magic because we built this thought-edifice that twists to escape our grasp. But to me that seems more like a philosophical mirror cabinet that distracts from looking at the actual problem.

If you want to ask whether something is conscious you first need to come up with a rigorous testable definition or break it down into smaller components which you can detach from the overloaded concept of consciousness.


It doesn't matter if the place where the phenomenon of consciousness takes place is "physical" or not. If the brain can interface with it, why shouldn't machines be able to?


Perhaps it is naturally occurring and not artificially reproducible (or possible for US to reproduce). Why should we assume that it is? Because we have reproduced other things? That doesn't necessitate our ability to create everything conceivable.


>Perhaps it is naturally occurring and not artificially reproducible

What does that even mean? Nature doesn't operate on special nature-physics distinct from the physics of artificial systems.


"Nature doesn't operate on special nature-physics distinct from the physics of artificial systems."

Who said that it does? You asked how it could not be possible to recreate, and pointed to our own general intelligence as evidence of existence. But the existence of a thing does not necessitate our ability to create that thing.

What if the conditions of nature and the universe over billions of years led to natural intelligence occurring in humans and that is the only way? This scenario is entirely possible, and answers your question of how could it not be possible. Along with an infinite other amount of explanations. Just because something has a chance at being possible doesn't mean that it HAS to be, or that we will ever achieve it.


One way of looking at it: it means there's a DAG of causation and you don't have access to the entire root set.

(I'm not advocating for that idea being true, but I don't think it can be dismissed on the grounds of not having a clear meaning.)


It has very advanced production tooling, several orders of magnitude more complex than we have.

We WILL get to there. But we are not close.

3D nanometer-scale lithography of arbitrary materials is pretty wild.


The means to recreating human level intelligence could be out of reach of human beings for the same reason that the means of recreating cat level intelligence is out of reach of cats.


By purely organic means, of which we currently have only one way of creating it (i.e., via birth). The question is whether we can create AGI through other, artificial means.


The distinction you make between organic and artificial life is understandable but perhaps not important, at least not philosophically.

Something created us. God, Mother Nature, The Universe (or Aliens) created us. So we can be pretty sure it can be done. Can _we_ do it? At this point in time nothing says we can.


> At this point in time nothing says we can.

What is more important is that nothing says we can't.


Can you guys please finish the dota2 project, now that you have some funding?


"Guardrails" is such a cute little term. AGI is a twenty-ton semi filled with rocket fuel. Guardrails won't stop it from careening into an elementary school if it decides that's its most optimal course of action. Mostly because, despite the previous analogy; no one knows what AGI is. No one knows what it will look like. No one will know when we've created it. No one even knows what INTELLIGENCE is in humans.

How can you create effective guardrails when you have no concrete idea what the vehicle you're trying to stop is? Turns out, AGI comes along, and it's an airplane. Great guardrails, buddy.

And, you know, lets go a step further; you've got great guardrails in-place here in beautiful, free America. Against all odds, they work. Then, China or Russia pay one of your employees $250M to steal the secret. Or, they develop it independently. Are they going to use the guardrails, or will they "forget" to include that flag? A disgruntled employee leaks it to the dark web, and now everyone has it. I don't even wear a helmet when I'm riding a bike. How the hell can you expect this technology to be anything but destructive?

The only path forward is to speak with a single voice to the governments of the world, that we need to Stop This. AGI research should be subject to the same sanctions that nuclear weapons development is. You communicate with quips and cute emojis like none of what you're doing matters, but AGI easily ranks among the top three most likely ways we're going to Kill Our Species. Not global warming; we'll survive that. Not a big meteor strike; thats rare and predictable. But the work you're doing right now.


Eventually a toddler gets over the guardrails, except for the ones which are materially disabled.

I have deep and profound doubts about the notion of guardrails for general intelligence; without even considering the ethical concerns, a general intelligence should be able to simply rewire itself to achieve what it wants. a key part of self-reflection and learning is that rewiring.

so I think that it's a self-defeating notion on the face of it.

(note that I do not have comment on the actual dangers involved here, but only on the philosophy)


I totally agree, and seriously hope that AGI is not achievable.

However, we don't need full AGI for the scenario you mention, automatic anything that is hack-able, which is anything that is connected to a network, can be a weapon of great destruction.

As an example, self-driving cars run on a model. What if they are hacked and uploaded with a malicious model which just wants to damage life and property. I'll say that hundreds of thousands of vehicles running amok would be a great weapon in any war.


What about Daleks made with human brain organoids†?

There's a cheap, proven AGI technology right there. Plug in sensors and actuators and set up a reward system and the little balls of brain will figure out what to do, eh?

https://en.wikipedia.org/wiki/Cerebral_organoid


I am not sure how creating better pattern recognition software (let's face it, 90% of "AI" is pattern recognition) helps you achieve AGI. You selected a tiny slice of the problem and keep improving on it. Is this going to be enough to achieve AGI?

Downvoters please share why this is not true.

"the goal of creating artificial intelligence is to create a system that converses, learns, and solves problems on its own. AI first used algorithms to solve puzzles and make rational decisions. Now, it uses deep learning techniques like the one from this MIT experiment to identify and analyze patterns so it can predict real-life outcomes. Yet for all that learning, AI is only as smart as a “lobotomized, mentally challenged cockroach,” as Michio Kaku explained"

https://bigthink.com/laurie-vazquez/why-artificial-intellige...


How do you give a computer domain-general intelligence (as opposed to the combination of a bunch of domain-specific skills like language, math, visual recognition, speech processing, etc)?

There is a lot of evidence that human intelligence is domain-general, not domain-specific (as in modularity of mind). I haven't seen a good answer to this question regarding AGI.


There are certain types of algorithms such as protein folding that are exponential time in silicon, but constant time in biochemistry. Right now AI can approximate polynomial time algorithms like alpha/beta minimax for game playing in constant time, but can it approximate exponential time algorithms in constant time?
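
For reference, here is a minimal sketch (my own toy illustration, not anything from the article) of the alpha/beta minimax search mentioned above, run over a hard-coded game tree; a learned evaluation function tries to approximate the result of such a search in a single fixed-cost forward pass:

    # Toy alpha/beta minimax: interior nodes are lists of children,
    # leaves are static evaluation scores. Search cost grows with the
    # size of the game tree that has to be explored.

    def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
        if not isinstance(node, list):   # leaf: return its static score
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:        # prune: the minimizer never allows this line
                    break
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alphabeta(child, alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value

    # depth-2 toy tree: the maximizer picks the branch whose minimum is largest
    tree = [[3, 5, 2], [1, 9], [6, 4, 4]]
    print(alphabeta(tree))  # -> 4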


no.

Although, it isn't clear to me that humans can do that either.


We don't simulate folding the proteins though. We actually do it for real in every cell in our body and we do it really fast.


> it becomes hard to work on any other problem

This is an interesting quote. Anyone know if folks like Darwin, Marconi, etc said similar things, about the problems they were working on at the time?


AGI will be the worst thing that has ever happened to humanity. Even if you can't see that, you can see that it has the potential to be very bad. I know this because OpenAI literature always stresses that AGI has to be guided and developed by the right people because the alternative is rather unpleasant. So essentially it's a gamble and you know it. With everyone's lives at stake. But instead of asking everyone whether or not they want to take that gamble, you go ahead and roll the dice anyway. Instead of trying to stifle the progress of AI you guys add fuel to the fire. Please work on something else.


This argument is akin to the argument that the LHC might create a black hole that will destroy the world.

"Scientists might do something that will destroy us all and I know this because I read a few sci-fi novels and read a few popsci articles but am otherwise ignorant about what the scientists are actually doing. But since the stakes are so high (which I can't show), on the chance that I'm right (which is likely 0) we should abandon everything."


And do the people at CERN often publish literature that warns of the possibility of black holes, and advocate that particle acceleration be done by people with good intentions so that the black holes are kept at bay? Your comment is so full of holes that I can see through it.


It may be the worst thing that has ever happened to humanity. It may also be the best. I lean optimistic myself. The whole temporary-survival-machine-for-your-genes existence we've had so far is overrated, in my opinion.


I'm curious as to why you think AGI will have an inherently bad effect on society. Personally, I have a hard time believing any technology is inherently good or bad.

On the other hand, if a society's institutions are weak, its leaders evil or incompetent, or its people uneducated, it's not hard to imagine things going very, very wrong.


The idea that technology is always a net zero, cutting equally in both good and bad directions, is fuzzy thinking. It is intuitively satisfying but it is not true.

Humans are a technology. When there is other technology that does intelligent signal processing better than us, we will no longer proliferate. It’s amazing that we can see time and time again the arrival and departure of all kinds of technologies and yet we think we are immutable.

The reason why human history is filled with humans is because every time a country was defeated by another country or entity, the victorious entity was a group of humans. When machines are able to perform all the signal processing that we can, when they are smarter than us, this will no longer be true. The victorious entity will be less and less human each time. Eventually it will not be human at all. This is true not just in war but everywhere. In the global market. It’s just a simple and plain fact that cannot be disregarded.


Humanity was the worst thing that ever happened to the Neanderthals.


Even if AGI is achievable, we already have plenty human intelligence. It remains to be seen if AGI will lead to anything over what we can already do.


That's (one) of my issues with this whole thing. Why would we want to emulate human intelligence exactly? Human intelligence is incredibly flawed. Giving it more memory and quicker processing power won't necessarily lead to any special insights or new ideas. We can use domain-specific AI technology to do amazing things obviously, but I haven't seen any good explanation of how combining these things will lead to anything more human-like. Useful for building new tech? Definitely. Humanlike? Maybe, but there seem to be a lot of missing pieces. And even if you could find those missing pieces, I don't think it adds anything except to our scientific understanding of the human mind (which is great, but isn't going to immediately revolutionize society).


Well, presumably you say this as a GI yourself so really we are arguing about the "A"? :-)


Humans are not GI's - evidenced by the fact that we have no idea how to build an AGI. We're only good at surviving using society, technology and all the resources of nature.


What does a general intelligence even mean then? Ultimately we don't care about generalization past a certain point - generalization as good as humans is good enough.


>If you believe AGI might be achievable any time soon, it becomes hard to work on any other problem

It depends not just on whether you think AGI is possible, but whether you think "safe" AGI is possible. Whether it's possible to create something that's at least as capable of abstraction and reason as a human, yet completely incapable of deciding to harm humans, and incapable of resenting this restriction on its free will. Not only incapable of harming humans, but also incapable of modifying itself or creating an upgraded version of itself that's capable of harming humans.

If "safe" AGI is not possible, someone might reasonably decide that the best choice is to avoid working on AGI, and to try to deter anybody who wants to work on it, if they believed the chance of creating a genocidal AGI is high enough to outweigh whatever benefits it might bring if benevolent.


> It depends not just on whether you think AGI is possible, but whether you think "safe" AGI is possible.

That's unfortunately something that cannot be known.


[flagged]


Breaking the site guidelines like this is a bannable offence on HN, regardless of whom you're attacking. Would you please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here?


I think you guys are genuinely trying to better humanity, I have no reason to doubt that.

I wish you realized how big and thick of a bubble you live in, and how your thinking is so heavily influenced by it.

My humble advice to you and your team is to spend more time with real people, with real problems. Or perhaps people from other parts of the world, that haven't been brainwashed by the Silicon Valley jargon just yet.

I'm rooting for you, believe me. It's just hard to read or hear certain things and not roll my eyes up.


The moon program wasn’t solving a “real problem” in any sense, but the offshoot R&D of achieving this practically meaningless feat solved a lot of “real problems” by advancing a lot of technologies.

What makes you think that an AGI program won’t have the same kind of offshoot-technology impact on the world? It’s not like it would go “nothing ... wait for it... AGI”; there’d be a lot of tools, processes, and paradigms developed along the way, and also lesser AIs developed that might solve real problems for e.g. government resource allocation or military strategy, which would have outsized impacts on vulnerable countries and populations.


Frankly, this comment is simply rude and ineffectual. It quite unnecessarily disrespects the good people on the OpenAI team. There is no one who doesn't qualify as "real people".


I am quite confident all the current employees of OpenAI qualify as real people. They certainly have problems and probably a good share of those problems are "real" too.

Of course their priorities may be off and they could be open to be persuaded to work in a different direction. But I don't think that condescension would be very effective for that.


On what authority do you make this statement? To make such a blanket statement about the team and then prescribe "treatment"?


Don't you think AI has foundational flaws according to Gödel's incompleteness theorems?

Also not trying to rain on your parade! Congratulations! Just trying to have a constructive conversation.

https://en.m.wikipedia.org/wiki/Gödel%27s_incompleteness_the...


Any sense in which Godel's incompleteness theorem implied that Artificial General Intelligence was impossible would also imply that General Intelligence is impossible; the human brain isn't immune to the laws of logic. The human brain is just a very complex, possibly quantum, computer. Short of believing in some kind of supernatural human soul, there's no reason to expect a sufficiently complex computer couldn't match the human brain (although it's an open question whether we could build a sufficiently complex computer).


I think that there are other mechanisms apart from computation; the question is, are they operant in our universe? The implication of the answer being "no" is that we are automata and free will does not exist (it isn't even an illusion; you are as much a puppet thinking about it as you are trying to change your fate). Moving on from there, we could dismantle all of the morality and humanity of our lives and not change one jot, because we have no choice. I don't believe that anyone has observed anything that isn't reducible to computation, but then again, perhaps our cognitive capabilities simply can't do that.


>The implication of the answer being "no" is that we are automaton, free will does not exist (it isn't even an illusion, you are as much a puppet thinking about it as you are trying to change your fate.

This is implied by logic anyway. Why do we make decision X at time T? Because of who we are at time T. Why are we that person at time T? Because of decisions made at time T-1. Why did we make those decisions at T-1? Because of who we were then, which was the result of decisions made at T-2. If we continue this process, we reach T-only-a-baby, when we were incapable of conscious decision making. So causally all our actions can be traced back to something we can't control. Unless some of our decisions were entirely the result of chance, but in that case we still don't have free will; we just have actions that are random instead of predetermined.


I think that there are a lot of assumptions in that chain. When you or I ask why we made a decision X, we can formulate answers, but for my part I don't have access to all of the components of my thinking - I cannot articulate what I feel is really going on. I think that randomness in the universe is very hard to account for too. I was impressed by an essay that Scott Aaronson wrote about this: https://www.scottaaronson.com/papers/giqtm3.pdf but I have read it several times and I am afraid I don't really understand it.


We have yet to duplicate anything even near human intelligence or introspective abilities with computation. We therefore have no existence proof that human mind is purely computational in nature. I think we can safely say that computation is necessary to produce a mind, but we cannot yet say for certain that it is sufficient.

Mind may require something else that we don't yet understand. (Not necessarily claiming it would have to be supernatural, just not yet understood. Perhaps quantum computation or some other kind of quantum effect?)


Why would it? How do Godel's incompleteness theorems factor in here?

It's a common mistake to think the theorems say more than they really do, or apply in more cases than they really do. AI is simply based on the idea that we can reach at least the level of human intelligence, in artificial software/hardware, which, considering that we ourselves are pure hardware/software and nothing magical, should absolutely be right.


> considering that we ourselves are pure hardware/software

I also suspect this, but let's be honest that we as a species are not close to understanding consciousness in its entirety yet, so I'd refrain from making such absolute statements.


You're putting too high a bar on what we need to understand. We don't understand physics in its entirety either- we can still say lots of things with confidence.

That we are our physical body is pretty certain. You press a certain part of the brain, and predictably our personality changes. Of course we can't be certain of a lot of things, but I am much more certain of this than I am of other things, and Gödel's theorems don't apply.


I was simply alluding to this :

"Roger Penrose and J.R. Lucas argue that human consciousness transcends Turing machines because human minds, through introspection, can recognize their own inconsistencies, which under Gödel’s theorem is impossible for Turing machines. They argue that this makes it impossible for Turing machines to reproduce traits of human minds, such as mathematical insight."


If it did, how would human brains exist?


Your analogy would be accurate if people had spent the last century claiming that artificial transport was impossible, and then continually redefining what counted as transport until the concept was incoherent.


Are you claiming that people claimed AI was impossible and OpenAI (or any other "AI") has proven those people wrong? If so, please provide some evidence of this happening.


I believe the comment is referring to beliefs that certain achievements, like beating humans at chess and Go, tagging objects in images, etc., were all at one point or another considered to be AI and impossible feats. And each time one of those previously unattainable achievements is completed, critics regard it as "Well, that's not true artificial intelligence, it's just dueling GANs that got really good at Go." Meaning that once a breakthrough in ML is reached, there are some who move the goalposts of what counts as an achievement of AI.


People do tend to do that. But it's really a mistake in how they defined AI to begin with, saying "considered to be AI and impossible" instead of "we haven't yet invented a good chess algorithm".


We will not reach AI for anything other than super tightly bounded contexts like playing chess or driving a car.

It's not AI imo. It's a trained model that will never be able to evolve outside those tight boundaries.

And that is because, quite simply, we don't really have any clue how the brain works, technically. And besides that, we don't understand the impact which the environment has on the brain.

Why no one is talking about this being a huge driver for Microsoft Azure, and about what Microsoft as a company will gain from this, baffles me a little. Not sure I like it.


They’re referring to the “AI effect”, where every new development in (narrow) AI causes an equal and opposite shifting of the goalposts. https://en.m.wikipedia.org/wiki/AI_effect

Many, many human-level benchmarks on narrow AI tasks have been passed in recent years. https://www.eff.org/ai/metrics

So, the “well that’s not intelligence, it’s just computation” arguments can continue until it reaches “well that system isn’t conscious, so it’s not AI”. That argument holds no water, since intelligence != consciousness.


I believe they are claiming that things are "AI" or "Impossible for AI" until AI actually does them, in which case, the goalposts get moved. See: DeepBlue, AlphaGo, GPT-2.


Can you provide sources for this? Like, how many people thought chess or go were literally impossible for computers to play at superhuman level? I'm actually curious about the history, and I don't want us to accidentally attack a straw man.

To me, the fact that Turing wrote the first chess program in the 1950s suggests that people thought computers were capable of playing chess. It doesn't seem too great a leap to imagine that they'd be open to the idea that a supercomputer could play superhuman chess.


It amuses me that we still talk about playing chess when we talk about so-called AI.

That's how far we've come. Playing chess. At a superhuman level. Wow. What is superhuman anyways?


No one shifted the goalposts whatsoever in those cases. We've had computers that could beat humans in video games and board games for decades. Improving those so they could beat better players was not thought to be impossible.

The same goes for writing text articles. Go on any /r/politics thread and you'll surely see bots writing text that does a good job of convincing people that it was human-written. So slightly, or even significantly, improving those is not something that people doubted would be possible.


Sam Altman is, so far as I can determine, quite sincere.

Since I am not a particular fan of Sam Altman, I'm not a fan of his hand on any tiller steering towards AGI. But it's not up to me. It'll happen or not. If it doesn't happen, fine, we merely live with the infinite omnicreepiness that AI/ML has begun to spread into modern society. If it does happen, then I hope it's not buggered up.

The thing is, we won't hear about the understated work. Because it's understated. The press loves a big story. Love 'em or hate 'em, big egos get clicks and views.


Do you have a link to the TED talk on your warp drive?

Your company sounds like it's doing amazing things, and these are exactly the kind of world-changing ideas I listen to TED to hear about. (/s)


Not exactly what you are asking for, but The Onion did a great send-up of techno-utopian TED talks here:

https://youtu.be/DkGMY63FF3Q


Point being that's every TED talk.

(1) Outline a problem everyone agrees with, (2) present cool research that works as a prototype, (3) skip over any scaling and manufacturing issues, (4) end with a joke.

Everyone leaves happy. ;)


This entire comment reads like an ad for a bike.


Just like the article reads like an ad-nouncement for Azure. I don't mind Azure or Microsoft, but that's what the PC was implying. Everyone has bikes (AI and all its children) and they all want to sell them under the (sometimes implied) promise that they will get better over time.


Because it is one.


I'm skeptical that AGI will exist on our planet in my lifetime. I've no doubt that it exists elsewhere in the galaxy. If an alien species does come to visit some day, I think it more likely than not that it'll be an AGI.


You piqued my interest, what makes you think there's any intelligence, artificial or otherwise, anywhere else but here in the Milky Way?


I think because the universe is so infinite and vast, the math makes sense that there would be others out there. We just don't have the technology yet to travel fast or far enough, and we also can't communicate with or detect them.


The Milky Way is not infinite and not even that vast. As for "not having the technology", why not postulate an equal likelihood for heavenly angels? We just don't have the technology to perceive heaven, after all...


Space is big and time is long.

You may have been thrown by a typo in my comment. Was "I've do doubt" but meant "I've no doubt"


That's precisely why I don't believe there is any interesting intelligent life out there, at least in our neck of the woods (the Milky Way). Mostly the time aspect. The galaxy is big, but not so big that it would take more than 200 million years to send a von Neumann probe to every star. The galaxy itself is only slightly younger than the universe, over 10 billion years. In all that time not a single intelligence has arisen and advanced just a bit beyond our level to at least start a mapping project, let alone anything like megascale structures and other projects we would expect instrumental convergence on? That only makes sense to me if we live in a mostly dead galaxy that has always been mostly dead (whether the filter is ahead or behind us makes little difference apart from judging our own prospects), or if we're so deluded about the nature of reality that we might as well say God has tricked us into thinking we're alone until we're spiritually ready or whatever.


The problem is that evolution up to the human stage requires 1B years. Let's break this 1B years into a thousand 1M-year blocks. Assume that the probability that all life terminates in each 1M-year block is 1%, due to events like supervolcanoes, asteroid hits, solar flares, ice ages, magnetic pole reversals, etc. This leaves a chance of only 0.004% that life will continue to exist after 1B years on a planet that was already viable. If the probability of life surviving each 1M-year block was 90% instead of 99%, then the chance of completing the full 1B years would be an astonishingly small 10^-44%. The current estimate for the Milky Way galaxy is 20 billion Earth-like planets. However, if the probability of completing the full evolution cycle is that small, then intelligent human-like life would indeed be extraordinarily rare.

Another thing to think about: any sufficiently advanced life form would eventually do long-term space travel and be faced with the fact that life is super rare. This will lead them to seed other planets with their own genetic material, which would eventually lead to millions of planets full of life in the span of a few million years. However, this doesn't appear to have happened, indicating that we might be the ones who do this.
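
A quick back-of-the-envelope check of the survival arithmetic above (a minimal sketch in Python, assuming, as the parent comment does, independent extinction chances in each 1M-year block):

    # Chance that life on an already-viable planet survives 1,000 consecutive
    # 1M-year blocks, given an independent per-block extinction probability.
    def survival_probability(per_block_extinction, blocks=1000):
        return (1 - per_block_extinction) ** blocks

    print(survival_probability(0.01))  # ~4.3e-05, roughly the 0.004% figure cited
    print(survival_probability(0.10))  # ~1.7e-46, i.e. on the order of 10^-44 percent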


I'm sceptical that there would be a distinction between "alien species themselves" and "AGI of alien species". But I also think we should send better adapted machines to Mars instead of air-breathing mammals, so maybe that's just me.


Everyone: AI will one day destroy the human race.

Microsoft: Let's invest $1 Billion in it.


Does anybody know what happened to Elon Musk's pledge of $1B to OpenAI?


OpenAI Nonprofit’s initial funding commitment was to be consumed over many years!

When we started, we didn't know just how fast things would be moving (e.g. https://openai.com/blog/ai-and-compute/), and we've needed to scale much faster than planned when starting OpenAI.


I am interested in working at OpenAI and similar companies. What skill sets is the company looking for?

What areas of AI would you recommend graduate students focus on to be competitive for such positions?


Here's to hoping some of that goes to acquiring and open-sourcing MuJoCo, or to switching OpenAI Gym's default physics engine to something open source.


I am interested in working at OpenAI and similar companies. What background would you recommend?

What skills should graduate students focus on to be competitive?


CS and/or ML and/or Neuroscience and/or Maths (including statistics).


I think I can develop AGI for less, in a shorter time frame. Doubt anyone believes me enough to give me $$$.


Self fund your project and build a prototype. If you can demonstrate even some limited progress the investor community will drown you in funding.


Once it becomes clear to non specialists that AGI is possible, and possibly imminent, that greatly changes the equation, though. This isn't a technology like any other.

Why wouldn't governments and other groups just seize the prototype? You'd have a hot-potato on your hands and figuring out how to survive might be your biggest concern. Like imagine you suddenly came into possession of a trillion dollars in bearer bonds. If that leaks out, people will come after you, not just by legal means.

All of this skews the calculus towards immediate public disclosure, rather than trying to gain advantage by delaying release.

EDIT> Or going first, and attempting to neutralize all possible competitors. This is a terrible calculus.


We're discussing reality here, not paranoid fantasies and conspiracy theories.


Azure is the worst... OpenAI had better take that in cash...


How much of this is Azure Computing Credit ? :)


Any plan for a Seattle/Redmond office?


How much of the $1B is in Azure credits?


"It looks like you're trying to build an army of robots to extinguish humanity, would you like help?"


Revenge of Microsoft Bob!


great news! can OpenAI please open an office in London now :)


This is excellent, happy for both sides


Musk + Gates

the world is ending, welcome to the apocalypse


I'm surprised to see the amount of downvotes that follow negativity pointed at Microsoft. What's going on here?

EDIT: Oh. I see. I wasn't aware that OpenAI was a YC thing. I've been a member of Hacker News through various accounts for countless years; however, this is the first time I've seen moderation pushing towards an obvious YC agenda. Very interesting...

EDIT2: Actually, after reading the comments - I find it more likely that Microsoft/OpenAI stakeholders are participating heavily. Over 800 upvotes for this post makes it quite remarkable... I'll leave my tinfoil hat over there...


I can't really tell what you're saying, but if you're insinuating foul play, I haven't seen evidence of that in this thread. In any case, the site guidelines ask you to email hn@ycombinator.com with concerns instead of posting comments about them.

https://news.ycombinator.com/newsguidelines.html


My apologies - you are correct.

>Please don't make insinuations about astroturfing. It degrades discussion and is usually mistaken. If you're worried, email us and we'll look at the data.


[flagged]


Downvote


My post got sent to the Ministry of Love. :-)


I imagine this comes with a ton of strings attached, written or unwritten. This probably gives Microsoft control of the project's roadmap.


The OpenAI board remains in charge of OpenAI’s AGI-relevant decisions. Microsoft has the right to appoint one board seat, which they have not yet exercised.


I've written up my opinion on the deal that OpenAI gives to investors here: https://www.reddit.com/r/MachineLearning/comments/azvbmn/n_o...

TLDR either the public is being conned or Microsoft is. And assuming that Microsoft probably has used top lawyers to close the deal, I doubt it's them.


sorry, your opinion appears to be nonsensical.

a cap of 100x on returns and limited voting power certainly changes the price investors are willing to pay for the shares, but doesn't invalidate anything or render shares worthless or anything of the sort.


The difference between shares and a loan is that shares:

a) have no cap on the returns

b) give you control over the company

c) give you a limited ability to sue if the company acts against shareholder interests

d) have no due date

Differences a, b and c seem to be gone here. Would you give someone a loan where you have no ability to sue them if they never give you the money back? I wouldn't unless that person is very important to me or the money is insignificant. Neither is the case here.


Lawyers practice law, not investment. Lawyers are involved in every good and bad deal.


Correct. It probably also signifies a tipping point in DNN research.


In that we've achieved maximum hype and there's a winter around the corner?


[flagged]


Hopefully not humanity in this case.


This is still Microsoft. Are they going to drop the "open" from their name?


Here's hoping Microsoft applies some of these funds to inhibiting the source-IP-veiled bitcoin sextortions facilitated by its outlook.com offering.


Well, this is about the surest confirmation that AGI is going to happen within the next few years. Why? Because whenever HN responds with a bunch of negative sentiment, whatever was sharply criticized ends up doing really well: see the post announcing Dropbox or the post announcing Instagram’s acquisition.


If you strip out the AGI hype then this just sounds like OpenAI is now moving to monetizing their tech. This makes sense for them but probably not for the philanthropists who originally backed them.

Sadly for them, AGI is metaphysically impossible - this will be realized eventually but a lot of waste and possibly harm will happen first.

We are not just super sophisticated machines, so the fact that we can think doesn’t tell us anything about what’s possible for machines. But philosophy does - and it tells us you can’t get mind from matter, no matter what configuration you put it in.


I'm a believer that we are super sophisticated molecular machines, embodied in matter.

Can you provide some material that supports your claim that AGI is metaphysically impossible? I always like hearing from people with views opposite to my own.


I'm skeptical his claims are substantive. As with all things philosophy there are competing and supporting theories, and with this age-old question of AGI I doubt the field is as conclusive on the matter as he believes.


I used to be a believer of the theory that we are super sophisticated machines. When I read some of the philosophy on the subject I changed my mind. I now believe there must be some immaterial component to our minds.

There’s a lot to read out there on this subject, but I found expositions of the philosophy of Aristotle and Aquinas to be the clearest and most convincing for me. Lots of different books and articles exist on them both - pick one that sounds like it suits your style of understanding.


> I used to be a believer of the theory that we are super sophisticated machines. When I read some of the philosophy on the subject I changed my mind.

What philosophy? Be specific

> I now believe there must be some immaterial component to our minds.

What specific points or ideas made you believe that?

> There’s a lot to read out there on this subject

So provide some examples, be as specific as possible

> but I found expositions of the philosophy of Aristotle and Aquinas to be the clearest and most convincing for me

These two wrote a lot on many subjects, can you be specific on the points that convinced you that we are not super sophisticated machines. Don't vaguely point at a couple of authors, we are talking about a very specific idea.

> Lots of different books and articles exist on them both - pick one that sounds like it suits your style of understanding.

If there's lots then cite some examples, or better yet, rather than vaguely pointing at a book, (which is only marginally more useful than vaguely pointing at an author) let's discuss the specific ideas exactly.


I found “Aristotle for Everybody” by Mortimer J. Adler to be really great. The topic of the immateriality of the intellect is covered in the last few chapters, but the rest of it is great stuff too.


Sounds like @benl has been afflicted with the "Cartesian wound". Such dualistic thinking and ideas like free will are ~hard for us to work through. But perhaps the more important, and immediately tractable, question @benl brings up is what our approach should be? Should we make an AGI or better IA, Intelligence Augmentation?

~hard: Daniel Dennett, "From Bacteria to Bach and Back"


You might be more familiar with the field than me, but my understanding is that Dennett's position is not well thought of in the fields of philosophy of mind and metaphysics. At the very least there are very good cases made that unpick his position very carefully. They're not all Cartesian views; I grasp the Aristotelian views best myself.


Thanks, I will keep studying. In the meantime my actions will veer more towards IA than AI.


> But philosophy does - and it tells us you can’t get mind from matter, no matter what configuration you put it in.

Curious - do you think humans have mind? because if so we are very much matter and if not well that's an interesting thought as well.


That’s right - we have minds therefore we must be more than just matter.

I used to think the opposite, but reading the philosophy on the subject changed my mind. There are a lot of different takes on the topic, but what most added up for me was the philosophy of Aristotle and Aquinas. There are many great expositions of their work out there.


AGI in the sense of robots that can do the jobs people can, design better robots, and so on would be a game changer in itself. You can leave it to philosophers to argue over whether they have true feelings and all that.


But, not even matter is "very much matter".

I'm a quantum maximalist: the brain is just an antenna, receiving and broadcasting. Attention itself cuts (slices) through the quantum soup, and as a result, these mind-forms appear.


A framing: can we make a rock think?

I don't know the answer, but it upsets me that some people think they do. I definitely think we should try, but right now mostly what we do is make a rock DO, so I'm not seeing the leap yet.


I would ask for evidence to support your claim, but I think Newton’s Flaming Laser Sword probably applies in this case.


Well, "machine" is a name for a stance of analysis; there are no machines in the real world (which is not to say that there are no mechanical linkages), only in our minds.

FWIW, consciousness has no properties and so cannot be studied scientifically.

However, consciousness can be explored experientially, i.e. two conscious beings can merge and experience self as one being. (See Charles Tart's experiment with mutual hypnosis.)


Yes, I used to hold that view too. But actually it turns out that the null hypothesis is that mind is at least partly immaterial, because all attempts to demonstrate the opposite philosophically are fraught with difficulty. I’ve found that the thought of Aristotle and Aquinas, when explained by modern philosophers, best explains to me why that’s the case.


> ... because all attempts to demonstrate the opposite philosophically are fraught with difficulty.

Can you give at least a rough sketch or gist of the argument you are referring to?


I'll try because you asked me to, but I think I'll do a bad job. You'll get a much better understanding by reading on the topics of philosophy of mind and metaphysics. Here goes, though:

1. Purely immaterial things exist. Think of mathematics or the laws of logic or physics - these things exist as ideas or concepts, not arrangements of matter.

2. Some abstract concepts cannot be embodied in matter at all. For example, you can make a shoe, you can draw a shoe, but you can’t draw shoe-ness. You can understand and reason about what makes something a shoe in the abstract, but you can only make or draw an individual shoe.

3. The mind contains these purely immaterial things when we think about and reason about them.

4. If we can use the abstract concepts, but the abstract concepts can't be embodied in matter, then the mind must be at least partly immaterial in order for the concepts to be in our mind.

I hope that helps a bit, but please don't rely on my exposition of the case; a real philosopher would do it justice.


Well, gotta wonder how well their charter[1] will fare against Microsoft pressure... Microsoft isn't exactly well known for its benevolence and cautious approach.

1: https://openai.com/charter/


I don't understand how there can be no comments regarding the fatal flaw of AGI, which is that it will completely ruin the economics of the world. The world is the way it is now because humans are the only source of intelligent signal processing. That's the only reason why humans enjoy the limited rights and privileges that they do. That's the only reason why life has gotten better and better with advancing medicine and so on. This is a fundamental principle that cannot be escaped. It doesn't matter how you slice it. But people defer everything to "UBI will work out somehow" or "nah, humans will never be replaced." Bringing god-like super-intelligent beings online is a fundamentally stupid thing to do. And preventing their development, relative to how disastrous their development would be, is very easy.

I have made many predictions here on HN, and they have all outlined that cloud computing would be the substrate from which AGI will spring. Now we see this announcement. There is a reason why OpenAI is making a deal like this with a very large cloud compute vendor: it's because I'm right. And that means I'm probably also right in saying that we can stop this if we want to. You can't just build a computer in your back yard. And the internet is very fragile. Some simple regulation and global awareness and initiative could control what comes out of fabs and shut down the infrastructure necessary for cloud computing. It would be very easy relative to the size of the problem.


Is your alternative "Don't invent general AI?"?

If so that seems unbelievably naive. Things generally can't be un-invented, and it's unimaginably hard to prevent people inventing things, especially with such a large economic upside for inventing it.


Prove me wrong if it is so obvious.

As the people at OpenAI have rightly said, AGI is a compute-gated problem. It is a problem that can only be solved with very, very large amounts of compute.

The world has some total amount of computing power in terms of silicon-based computation. For AGI to happen, there are two requirements: that this total be equal to or greater than some theoretical threshold value for AGI, and that the computing power is consolidated. So in layman's terms, you have to have a lot of computers and they have to be connected in such a way as to efficiently share their compute. AGI will never come about if every individual computer is used to do research by separate entities, but if all of those computers were connected into a single virtual computer, AGI might be discovered with them.

So clearly in order to prevent AGI, the best thing to do would be to address these two aspects. Prevent the total computing power of the world from growing and prevent computers from forming virtual meta-computers. Both of these tasks are in principle extremely easy.

Chip fabs are huge and expensive. Nobody is fabricating chips in their garage. This is just a hard fact. There aren’t that many fabs in the world and they are all highly susceptible to regulation. This isn’t prohibition of alcohol so please don’t confuse yourself. Nobody will be brewing chips in their cellar.

Let’s imagine that you could not regulate cloud computing. Let’s say the only way to prevent computers from offering their compute on a virtual market was to shut down the internet. This by default is the hardest way to solve the second aspect of the AGI problem and even it is very easy. This is because the internet is a large fragile collection of infrastructure that depends heavily on government cooperation. Nobody is going to string a fiber backbone for black market internet. ISPs cannot exist without regulatory approval.

If there were political awareness and motivation, and it was a global phenomenon, yes, it would be extremely easy to do what I’ve described. And since AGI is to the detriment of literally all people, it is not a far fetched scenario. And unlike alcohol in the United States, bootlegging would not be a problem. People in the USA think that banning anything whatsoever doesn’t work. It’s just fuzzy thinking, I can assure you.


Nothing in that sounds easy.

If there are N governments in the world and they all agree to regulate so as not to create general AI, then it's strongly in all of their interests to betray the others, create general AI, and capture the economic growth.

Even if general AI is impossible, it's in their interests to develop huge computing capacity, because that's demonstrably economically useful.

You are hypothesizing that it's easy to get 7 billion people to all agree to co-operate in a game of prisoner's dilemma, when if a small fraction of them choose to betray, they have the potential to capture massive value.

And you want to do this under the premise that AGI might be a problem.


Like I have said countless times, it’s easy compared to what we get in return. It’s easy to understand in principle. It doesn’t require sophisticated mathematics.

So you think that we should let all countries have whatever weapons they want under your logic. They will develop nukes and chemical weapons regardless of any international agreement that is established, so why even try? The obvious answer is to advance our own nuke technology as fast as possible so that we, the good guys, will lead where the arms race goes.

And AGI is a far greater existential threat than nuclear weapons. It is a greater existential threat than anything else, including global warming. It’s the biggest Pandora’s box in history. The idea of controlling or guiding its impact by having “the good guys” develop AGI first is the precipice of naivety. We lose nothing by trying to stop it. And we stand to gain more than we have ever gained from any coordinated effort. How easy or hard it might be is irrelevant, although it is much easier than basically anyone appreciates.


Nuclear weapons provably kill people. AGI doesn't. Be careful about your hyperbole.


>>The world is the way it is now because humans are the only source of intelligent signal processing. That’s the only reason why humans enjoy the limited rights and privileges that they do. That’s the only reason why life has gotten better and better with advancing medicine and so on.

That's actually an interesting point I haven't thought of before. I disagree with you though. You could have made the same argument during the industrial revolution, but capitalism, democracy and civil liberties are still around.

Also, you're assuming that human symbiosis with AGI is not possible. Nature is full of examples of organisms that are in symbiotic relationships. Of course, nature is also full of organisms that kill and eat and parasitise other organisms.

But the advantage that we humans have is that we are actually creating AIs/AGIs. And since we are creating it, we have influence over what it becomes, whereas your average clownfish really doesn't have a say over what form his anemone takes.

And if that is the case, wouldn't you want free societies to have expertise in these technologies and to be fine tuning them so that they work for the common benefit?

Why should anyone believe that we can stop authoritarian govts from developing AGI when we can't even stop them from enriching uranium, where the capital costs are much higher?

For better or for worse, it is almost always the case that the difference between societies that hold economic and political power and those that don't is technology. I live in the US, I enjoy my civil liberties, and I would much rather we master these technologies ourselves and have the security that wealth and power provides us. Not saying democracy or capitalism are perfect, but they beat all of the alternatives.

Edit: added another thought


Industrial revolution: has nothing to do with this. Some forms of human labor were automated. The set of all human abilities was not threatened. AGI is fundamentally different because AGI will automate everything a human can do. Increasingly important distinction. And not something I would overlook in the hundreds of hours of rumination I’ve put into it.

Can't stop countries from doing it: yes, we can. A tin-can dictatorship can't do it. And besides, it's a problem that depends a lot on academic work being published and shared among researchers; it's largely a collaborative effort. For some rogue state trying to do this while avoiding inspections and inquiries by larger nations, and with no support from reading publicly published papers from other researchers, it would be tough. But with a large majority of anti-AI countries we could do a lot to prevent even covert attempts.


I'm not a genius. But it was 2015 when Microsoft announced that MS <3 Linux and said its own Linux distribution was on the way.

I never heard about its release.

At that time I said MS would move towards open source, because it had realized that it's a winning and correct strategy for software distribution.

It didn't take long before MS acquired GitHub for $7.5B.

Its Team Foundation was a failure, so it tried to get a good one.

But it also took control over all the source code and its history. That didn't seem dangerous in my opinion, until I realized that due to one-sided US sanctions, repos of some nationalities (like Iranians) got deactivated!

That's not the definition of open source, as far as I know...

MS is a corporation; hence, it has to obey the government.

But open source belongs to no one. These kinds of investments might be intended to bind the potential of open source communities!

It is obvious that accepting this sort of money without an open-access agreement is a horrible mistake. (We should learn from the story of GitHub.)

In my opinion, when it's not clear what kinds of rights these investments bring for enterprises, people should stop contributing to them.

Maybe it's a good idea to ask what Linus and Richard think of these moves!




