Hacker News
Elon Musk AMA (reddit.com)
569 points by kfinley on Jan 6, 2015 | 152 comments



I found his comment regarding learning to be the most insightful. It is a more developed explanation of what I have found to be an effective strategy for me:

> One bit of advice: it is important to view knowledge as sort of a semantic tree -- make sure you understand the fundamental principles, ie the trunk and big branches, before you get into the leaves/details or there is nothing for them to hang on to.

Without the structure of prior knowledge, I never understand or remember facts; however, when I've had the time to develop that "first principles" knowledge, I can usually grasp and understand the significance of minutiae.

How do those of you on HN learn? Is it similar?


It seems like common sense, but it's so difficult to do sometimes. I agree completely.

When I first started learning Linux, for example, I didn't just learn the commands I needed to do certain things; I tackled everything. I spent months and months learning everything I could about it. I bought a giant Linux book and went cover to cover. I learned about things I would never use (and probably still haven't).

I pushed myself to recompile the kernel even though I didn't need to. Then I did it probably 50 more times that month. No joke. Crashed my system. Rebuilt it. Rinse, repeat.

After laying down that foundation in the 90s, I've kept up on it, and Linux is so very "easy" for me. Setting things up and getting work done is extremely intuitive, far more so than it is in Windows or OSX. So when people ask me why I prefer it, I tell them it's a personal preference because it's so easy for me, and even I forget the foundation I laid.

I have taken on other pursuits the same way, such as development, but I notice that any technology I half-ass learn just to get stuff done... is hard. Sometimes I wish I had enough time in my adult life to build such a strong foundation in something like JavaScript, for example. And I bet that if I added up the time I spent struggling in the beginning, I would have had enough time to do just that.

But yeah, long story short this is absolutely the best way to learn something. Build that trunk.


You may find that, for some popular software technologies now, so far no one has organized the material into a solid tree with a trunk and a few large branches that quickly provide good paths to any of the leaves. Instead you may be looking at a noxious vine or even a patch of jungle with some poisonous plants and reptiles.


I have definitely noticed this, which is part of the reason I haven't "dug deep" with them, and instead learn enough of it to get a job done.


It takes good effort to organize a significant body of knowledge into a tree with a few big, short branches.


I highly recommend Make It Stick. Spaced repetition, testing effect, effortful learning, retrieval training, constructing mental models... it's chock full.


Another vote for Make It Stick. I did much better in my classes and was much more efficient in my prep using the SPRInt method (Spaced Repetition, Interleaving topics, Testing). I tried to read 'How we learn' but it really couldn't hold a candle to Make It Stick.


I agree that fundamentals are everything.

I just believe that almost everything in life is a skill, and skill requires practice.

I've come across a research paper about deliberate practice and Peter Norvig's article "Teach Yourself Programming in Ten Years."

These resonate with and reinforce my belief that practice is everything.

As for the intricacies of how, I'm more of a visual and kinesthetic kind of learner; auditory doesn't work for me. I also need a book: I sit down and write notes first and then do problems. From there I usually look for a video on the subject as a secondary source. Most of the time a secondary source gives a different view on the subject matter, and I get insight from a different point of view. Or the second source explains it better or fills in things I didn't realize I had glossed over or missed.


>when I've had the time to develop that "first principles" knowledge, I can usually grasp and understand the significance of minutiae.

Well, the fact that you've found what works for you is a good thing. However, 'first principles' is subject to subjective interpretation. You can 'go down the rabbit hole' as it were, to any level. Should you have deep knowledge of electronics before learning computer science? Should you have deep knowledge of physics and chemistry before learning electronics?

In my opinion after a certain level, all knowledge is multi-disciplinary, and the boundaries of what constitutes roots, branches, leaves is extremely fuzzy. Also the distinction between theory and practice makes the boundaries even fuzzier.


> In my opinion after a certain level, all knowledge is multi-disciplinary, and the boundaries of what constitutes roots, branches, leaves is extremely fuzzy. Also the distinction between theory and practice makes the boundaries even fuzzier.

Of course. I think this is the point of this learning style. After learning the first principles of various topics, the broad web that is higher knowledge is available to you.

Suppose, for instance, I wanted to learn how computer science worked from first principles. This study involves math, electronics, physics, and many, many more subjects. To accomplish this, I would pick one of the key, pure tenets and learn it. Let's say I choose math. I would then learn the key things I need to know about math and then move on to electronics, physics, and so on. After knowing these, I could confidently approach the "web" of computer science because I have anchor points.

I think it's safe to view higher knowledge as a web supported by the anchors of "pure" subjects. After a while, these higher subjects are built upon and become pure topics themselves. Epistemology and the classification of knowledge are really fascinating topics.


Yeah, I agree this is an important idea. I would suggest a couple of tests for whether your knowledge is sufficiently connected in the way that Musk advocates:

- (if you went to college) Did you have moments when the different courses connected? I think when people are poorly educated in college, it's because of this unfortunately common experience: they learn a bunch of specialized and disconnected subjects, never relate them to anything in their lives, and then forget them all.

I remember the subjects in CS/Math/EE starting to connect more and more around junior year, and I liked that feeling of a light bulb going on. You have to make a bit of extra effort. I did some little experiments outside class. I remember writing a Matlab program (an "engineering" tool) to do some experiments in non-Euclidean geometry (pure math).

Of course there are some subjects that never connected, and I forgot those things.

When you have that semantic network, it lets you evaluate new ideas and designs more quickly. You see which low level principles come into play from the high level variables.

- (if you are a programmer) I think there's a pretty clear "semantic tree" in computing: from computer architecture, to OS, to programming language, etc.

So the test is: if you are generally satisfied with how computers/phones/etc. work, then I would humbly suggest that your semantic tree of computers isn't very well fleshed out :) I think any good programmer should see lots of areas where the status quo is just a result of path dependence and not any actual design principle.

When you have a good knowledge of all levels of the stack, then you can be creative. For example, I'm looking at Xen right now, and it has dawned on me that paravirtualization is a great idea (or perhaps great hack).

The related Mirage OS / unikernel line of research is another great example of connecting all the dots, and coloring outside the normal lines. 99% of programming jobs are basically coloring within the lines, where it doesn't matter if you have developed this semantic tree or not.

Somewhat related: there were some recent threads about organizing personal information, and I wrote about using a Wiki: https://news.ycombinator.com/item?id=8753599

Some people talked about using a journal to record thoughts or knowledge, but my point was that hyperlinks literally model the relationships in your head, and thus are superior for information organization / recall.


There was a moment in, I believe, my diff eq course where the relationship between derivatives/integrals and the Laplace & Fourier transforms suddenly became crystal clear. That wasn't even what the lecture was about, but from that point on everything got a lot easier to understand. I'd taken two or three EE courses where we bounced back and forth between the time domain and frequency domain, and diff eq was my fourth calculus class, so both topics were already quite familiar to me, but grokking the relationship between them made everything so clear.
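(For anyone who hasn't had that moment yet, the relationship in question is presumably the standard transform property that differentiation in the time domain becomes multiplication in the transform domain: L{f'(t)} = s*F(s) - f(0) for the Laplace transform, and F{f'(t)} = i*w*F(w) for the Fourier transform, so a differential equation in t turns into an algebraic equation in s or w.)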


Mind of a Mnemonist is a nice book about this. It's not specifically about learning, it's about something else, but there are some rather interesting notes in it.


That quote is basically how I learn. I know concepts, not facts, and recreate the facts as I need them. That being said, it's definitely not how everyone learns, and I would classify it as bad advice: schools have the correct learning strategy for the vast majority of the human race, so stick to that.

If you are one of the outliers (as you have said you are) you would have figured this all out a long time ago, even if you cannot articulate it.


Schools massively vary in learning strategies though. Saying to stick to schools rather than teaching yourself, as a learning strategy, is a bit like saying to stick to food from shops rather than trying to grow your own, as an eating strategy.


If you're new to growing your own food, I indeed highly recommend you initially stick to food from shops as you start learning to grow your own, transitioning gradually as your skills improve and you can rely more on yourself. If you completely and abruptly stop buying food, relying entirely on your own farming with no transition period, trust me, you'll starve.

Likewise, stick to schools initially until you are sure you've got a solid grasp of core concepts, as taught and validated by people who know what you don't but should, then start transitioning away as your education can stand on its own. I've known too many "self taught" people who, while they can function in industry, have gaping holes that early formal thoroughness would have closed.


I went to 5 in my youth (moving cities/countries) and they didn't vary much - however, they were all situated in Africa and in many ways we are very backward here. Maybe that's why I have that perception.

What you say does make sense.


I wouldn't be at all surprised if they were sort of similar to my high school, as for the most part it was the basic model of regimented desks, rote learning, streamed classrooms and strict delineation of time into subjects with little crossover.

As far as I can tell, this model was initially developed for training the middle ranks of the aristocracy in how to be officers in the army, and it only ever really works if you are allowed to beat or drug the children as otherwise it is almost impossible to get them to pay attention while sitting still in rows for an entire day. Which is probably why our classrooms got smashed up by bored pupils fairly regularly.

On the other hand, I have a mate who went to a Steiner school, which he describes as 'the first school he didn't burn down' and there it is an entirely different model that is centered around development rather than training. If I had any kids I would be looking for a school for them that was more in line with that kind of environment. http://en.wikipedia.org/wiki/Waldorf_education


It's actually strange how much of modern society is inadequate. First ROWE[1] and then Waldorf education (an interesting read, thank you). We somehow turned assumptions into facts for so many facets of society and it's taking us decades to undo that mistake.

[1]: http://en.wikipedia.org/wiki/ROWE


We have a habit of reaching local maxima in some slightly hilly fields and then declaring them to be the highest mountains possible.


Definitely. When I'm learning some new discipline, the first thing is usually reading definitions of the most common terms, then some basic introduction, and only after that going deeper in the direction I originally needed. It doesn't always work with highly specific knowledge (you can't expect to be an expert just after reading a few books), but generally I find it the best way to learn.


Yes, early in my career I used that analogy of a tree, large branches, small branches, and leaves.


The moderators of r/IAMA apparently deleted several of the highest voted questions.

http://www.reddit.com/r/teslamotors/comments/2rgzgo/official...


Yeah, I've been following r/spacex for a little bit.

When they started trying to work out some good questions a day ago, someone suggested that everyone upvote the resulting questions, but the mods there quickly shut that down. [1]

The reason given for deleting the comments was specifically what everyone was trying to avoid, that is, vote brigading. As far as I can tell no one was asking for votes; they were simply working together to produce some high-quality questions.

[1] https://i.imgur.com/lBCWvQh.png


That's the attitude that they say murdered wikipedia. Rules are there to foster a better community, not to kill off good community initiatives. As soon as the adherence to the rules becomes more important than the quality of the product you've basically lost it. The whole trick is to know when not to apply the rules. AMAs are an exceptional item and a different ruleset for AMAs would not be all that hard to imagine. Pity.


Reddit hasn't been about community in about a year, and /r/IAmA is a huge part of that. It's a huge profit center for reddit and they moderate it heavily.


"Murdered" Wikipedia whose fifth pillar is "ignore all rules"?



Sure, editor numbers are in decline, but that doesn't support your point. Ignore All Rules is one of the fundamental principles the site was founded on.


The point is that that fundamental principle seems to be ignored by people that prefer to bicker over the rules rather than to be otherwise productive:

https://news.ycombinator.com/item?id=8596682

And that this in turn is what drives editors away. So it very much supports my point.


Wikipedia at this point is one of the primary sources for everything on the internet. In the early days of Wikipedia, you could pick any topic and there would be major articles unwritten. It was far better to have some article than to hold out for a great one. I can't comment exactly on the editor situation of the wiki, but a shift to more specialized and aggressive "curation" of articles, especially of more solidified topics, is to be expected. Wikipedia's fantastic performance contradicts this argument.


> Wikipedia at this point is one of the primary sources for everything on the internet.

That is explicitly what it isn't, anything but that.

> I can't comment exactly on the editor situation of the wiki, but a shift to more specialized and aggressive "curation" of articles, especially of more solidified topics, is to be expected. Wikipedia's fantastic performance contradicts this argument.

Wikipedia would still be a fantastic resource if nobody contributed to it from today forward. But that does not mean it couldn't be a whole lot better without the army of lawyer wannabes that are in a tug of war over who gets to have the most power over others by citing policies until the cows come home.

Lots of long time contributors have left because of this and the exodus is far from over. I agree that there is an expected shift to curation but the fantastic performance of wikipedia is not in any way evidence for there not being a significant negative undercurrent at work.

That's just evidence of how good the concept originally was and how much momentum it has built up.

Any kind of success will attract two kinds of people: those who wish to contribute and those who see it as a means to their personal ends, to get a piece of that success. Since Wikipedia is not big on credit for contributions, the only place where people craving recognition can get their fix is in becoming 'editors', and unfortunately the motivations of those editors are not always pure.

See elsewhere in this thread for some of the more bizarre displays of such behavior.


Google uses it for their semantic searches and Apple/Microsoft use it for their personal assistants. So this idea that Wikipedia isn't a primary source of information is nonsense. It is.

Until it stops being the first search result for the majority of "typical" searches it will remain a primary source of information.


I think you're confusing the parent's use of "primary" + "source" with the lexical item "primary source". The parent presumably meant "one of the most main sources [to use as one's only or first point of research]".

Sort of separately, but as threeseed points out, Wikipedia is frequently used as a corpus for primary research, so technically it's that kind of "primary source" too.


Your point seems limited to the inclusionist / deletionist discussion.

But you don't address the problems that some good faith editors have with making edits to improve the project.

These problems include over-zealous reverts by people making rapid automatic edits -- sometimes in a misguided attempt to show they "work hard" as part of a drive for adminship -- and page ownership and the accusations of bad faith that go with it (BRD fails hard when you have a group owning a page).

Wikipedia has strict socking policies, so most experienced editors never try making a new account to edit, but I recommend that any experienced Wikipedia editor tries this at least once a year. (And socking is allowed in this case because IAR.)


You're making leaps of logic based on the occasional HN anecdote. Numbers have been in decline since 2007; what percentage of those who left did so over bickering about rules? Do you have any evidence for this given that IAR has been in place as a core principle throughout?

IAR can be and indeed is invoked all the time, making Wikipedia a bad example of how strict rule adherence stifles a community. It was quite a simple point that needn't warrant downvotes, italics and so many HN searches.


> Do you have any evidence for this given that IAR has been in place as a core principle throughout?

> It was quite a simple point that needn't warrant downvotes, italics and so many HN searches.

You ask for more evidence in the same comment in which you rant against 'so many HN searches', do you notice the inconsistency there?


Happy to continue to debate this point (possibly on another medium?) but no, a comment 437 days ago on HN saying "I once tried to edit but was reverted" is not evidence that too many rules have "murdered wikipedia", your original contention with which I took issue.


Can you provide any on-WP examples of people using IAR successfully?


"The encyclopedia anyone can edit" is another rule the site was founded on, yet that's clearly not true. Look at the hostility that IP editors face even though most good wikipedia edits come from IP editors and IP editors are less anonymous than logged in editors.

Try using WP:IAR anywhere on WP today and you'll quickly see how far WP has moved from its founding principles.

EDIT: I mean, just look at usernames. You're supposed to be able to edit without a login, but sometimes that causes problems. So you go to create a username. The software has a list of words that you cannot use (very few people think allowing a username like "JewKiller666" is a good idea). But then there's a username policy. This has been reviewed to make it more friendly to new users. But the application of those rules is still pretty hostile.

http://en.wikipedia.org/wiki/Wikipedia:Username

That's the policy. See the changes to the "misleading" names section. That section had to be expanded because editors using their real name in a different script (e.g., Japanese users) were being told their Unicode name was misleading. Or a user with, e.g., a pseudo-random string of characters was told that their name was confusing, even though there wasn't any other name or namespace to confuse "kejdhdkaksaas983" with.

The "dealing with inappropriate usernames" section required a lot of work to prevent the admin-wannabe users from making many reports.

Once you've picked a name that gets past the software's filters but which an editor -- or bot -- thinks is bad, you face:

1) templates. {{subst:uw-username}}

2) a RFC http://en.m.wikipedia.org/wiki/Wikipedia:RFC/N

3) an administrator notice board http://en.wikipedia.org/wiki/Wikipedia:Usernames_for_adminis...

Notice that bot reports, which the bot admits may be low quality, get sent to UAA, not the lower levels of discussion.

4) a holding pen http://en.wikipedia.org/wiki/Wikipedia:Usernames_for_adminis...

This convoluted, conflicting mass of policy is hostile to new users, especially in the way it gets applied by editors. Just try using "WP:IAR" during this process.


Wikipedia has spent many megabytes of text arguing about the hyphen, minus, en-dash, and em-dash.

These arguments (different arguments among different people) are spread over different pages and different spaces. They happened on article talk pages; in meta space (the village pump, the WP manual of style); in admin spaces (ANI); even in some ARBCOM cases.

There's easily 500,000 words about hyphen, minus, en-dash and em-dash on wikipedia.

https://news.ycombinator.com/item?id=8600342


> fifth pillar is "ignore all rules"?

Go and make some useful edits and see how that works out for you. Rules are strictly enforced above all else.


Unsure of how much editing you've done but I've made over 10,000 edits over the past 9 years and that's not been my experience (google my HN username + wikipedia).


I wonder if it's an experienced editor vs. newbie thing.

Everything I try to do is immediately reverted and sixteen rules are cited. It's extremely frustrating, and it happens to many others in the community I'm in (Driving around the world), to the point that I set up our own wiki so we don't have to deal with the BS bureaucracy of Wikipedia.


One of the few things you can do that will get Reddit admin attention is to "manipulate" votes. It's not surprising that mods are cautious when people say "get this whole sub to upvote the post".

From a first reading that screenshot sounds like vote-brigading, not like using a single thread of questions within a sub to organise a list of great questions.


Yes, you lose some context in the screenshot.

That comment was posted to the single thread of questions in the sub, and was quickly shot down by the sub's mods (u/EchoLogic). Here is a link to the actual comment if anyone is interested in having a look:

https://www.reddit.com/r/spacex/comments/2rb303/elon_musk_is...


I was trying to follow what happened. It appears that prior to the start of the AMA, various subreddits, such as r/teslamotors, collected questions within their subreddit that they wished Musk to answer, then posted them in the AMA and upvoted them.

Moderators of /r/IAmA saw that as "vote brigading" and hid or deleted the questions so that Musk could not answer them.

Did I get that right in terms of what happened?


Well the AMA was supposed to be SpaceX only. I'm not surprised that questions about Tesla were deleted or ignored.


Nothing in the original post by Elon Musk states that.

Its a "AMA" or "Ask me Anything."

Original Post: >Zip2, PayPal, SpaceX, Tesla and SolarCity. Started off doing software engineering and now do aerospace & automotive.

>Falcon 9 launch webcast live at 6am EST tomorrow at SpaceX.com

>Looking forward to your questions.

>https://twitter.com/elonmusk/status/552279321491275776

>It is 10:17pm at Cape Canaveral. Have to go prep for launch! >Thanks for your questions.


That's not how AMAs work, though, right? I thought the whole point of AMA was that questions could address any topic.


Generally I enjoy reddit, but when it goes wrong it really goes wrong. It shows the side of itself many prefer to ignore: the petty whims and power plays of mods becoming more important than the discussion.


"The best teacher I ever had was my elementary school principal. Our math teacher quit for some reason and he decided to sub in himself for math and accelerate the syllabus by a year. We had to work like the house was on fire for the first half of the lesson and do extra homework, but then we got to hear stories of when he was a soldier in WWII. If you didn't do the work, you didn't get to hear the stories. Everybody did the work."

Amazing.


If you just want to read the questions/responses: http://www.reddit.com/r/tabled/comments/2rh6si/table_iama_i_...


I also made a site (http://www.amatranscripts.com) that transcribes popular AMAs. Elon Musk's is here: http://amatranscripts.com/ama/elon_musk_2015-01-05.html


Nicely done. I've seen a couple of these sites, but I quite like the output of yours.

Is there any logic as to what order the questions and replies are displayed on the page? It doesn't seem to be either of reddit's 'top' or 'best' sorting. Perhaps whatever order they landed in within the JSON?


The questions and answers are presented in the order that they were answered, which I believe gives the best flow to the "interview".
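For anyone curious how that ordering can be recovered, here is a rough sketch of one plausible approach (my own guess, not necessarily what the site actually does; the thread JSON URL form and the OP account name are assumptions, and the username below is a placeholder):

  # Hypothetical sketch: list the OP's answers in the order they were posted,
  # using reddit's public JSON view of a thread (thread URL + ".json").
  import json, urllib.request

  URL = "https://www.reddit.com/r/IAmA/comments/2rgsan.json"
  OP = "ElonMuskOfficial"  # placeholder for the AMA account name

  def walk(listing, out):
      # Flatten reddit's nested comment listing into a flat list of comments.
      for child in listing.get("data", {}).get("children", []):
          data = child.get("data", {})
          if child.get("kind") == "t1":
              out.append(data)
          if isinstance(data.get("replies"), dict):
              walk(data["replies"], out)
      return out

  req = urllib.request.Request(URL, headers={"User-Agent": "ama-transcript-sketch"})
  post, comments = json.load(urllib.request.urlopen(req))
  answers = [c for c in walk(comments, []) if c.get("author") == OP]
  answers.sort(key=lambda c: c["created_utc"])  # "interview order" = order answered
  for a in answers:
      print(a["body"][:80])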


This is brilliant, thank you.

On the Musk transcript I found this formatting confusing: "Have you played Kerbal Space Program?

What do you think SpaceX uses for testing software?"

I can't access Reddit to see what the original comment was.


Two KSP mentions:

Q: In order to use the full MCT design (100 passengers), will BFR be one core or 3 cores?

EM: At first, I was thinking we would just scale up Falcon Heavy, but it looks like it probably makes more sense just to have a single monster boost stage.

Q: Nice to see you are doing things the Kerbal way.

EM: Kerbal is awesome!

The second one:

Q: "Hi Elon! Huge fan of yours. Have you heard of/played Kerbal Space Program? Also do you see SpaceX working with Squad (the people behind KSP) to integrate SpaceX parts into KSP?"

Reply (not from EM): What do you think SpaceX uses for testing software?

EM to Reply: Kerbal Space Program!

Short version - Elon Musk likes and plays Kerbal Space Program.


Yet another option: http://skimreddit.com/http://www.reddit.com/r/IAmA/comments/...

You can use it on any post (not just AMAs) by adding their URL in front or clicking their bookmarklet.



I am still reeling from having just learned that in Wernher Von Braun's book 'The Mars Project', it is proposed that the leader of the Martian government when it is formed shall be known as the 'Elon'.

https://i.imgur.com/65YR89H.png


The book says no such thing. http://books.google.com.uy/books?id=V16e-xQmyZQC&pg=PR5&sour...

Edit: There appears to be a posthumously published book named Project Mars that says that. Not sure if I trust it.


I confused things, as there are two separate books with very similar titles by von Braun: 'Project Mars' (ISBN 0973820330) and 'The Mars Project' (ISBN 0252062272). The first is sci-fi and the second is technical.

Here's a pdf of Project Mars - http://www.wlym.com/archive/oakland/docs/MarsProject.pdf

The reference to the Elon is on page 177.

I hadn't initially noticed the fact that it was posthumously published in 2006; however, it would seem like an odd kind of forgery, if it is one.

Equally it does seem odd that Braun would choose Elon as the name of the Mars leader, so perhaps it might be a real work but with Elon added as a joke by the translator.

Or perhaps Braun chose the word Elon because he sometimes thought of leaders as trees, or something, and it is all just a massive bit of luck.

Personally I'm starting to suspect another explanation however. And if I'm right, there is an entire warehouse full of empty Elon Musk clones on ice, waiting for the spirit of Wernher Von Braun to animate each one in turn, in the event of damage occurring to the current corporeal vessel.


"When going through hell, keep going".


I saw that, great quote. I also love its corollary, "If you are digging yourself into a hole, first stop digging."


Not sure what the word is, but I don't think it's "corollary." It's the anti-corollary.


Niels Bohr: "Two sorts of truth: profound truths recognized by the fact that the opposite is also a profound truth, in contrast to trivialities where opposites are obviously absurd."


Ok, added that one to my quotations file.


yah, a corollary would end more like "dig until you get to the other side". Definitely prefer Churchill's version :)


Converse?


That's actually a country song.


Musk attributes it to Churchill, though I don't think anybody has found any evidence that the prime minister ever said it. The earliest evidence anybody can find of the phrase comes from the 90s.


Forbes has an article on Churchill using these words.[0]

[0] http://www.forbes.com/sites/geoffloftus/2012/05/09/if-youre-...


Maybe Geoff Loftus can provide a reference so the world can stop looking. Just because it's in an article from a Forbes contributor doesn't make it factual. It just means he thought Churchill said it when he wrote his article. Musk did the very same thing.


Practically every famous quote ever has been attributed to Churchill, so I'm skeptical.

But it really is a country song: http://www.youtube.com/watch?v=l50L4GYhpLc


I really like this comment

https://www.reddit.com/r/IAmA/comments/2rgsan/i_am_elon_musk...

He says that he has no idea what is going to happen with the launch tomorrow; it's a refreshing honesty.

I was also wondering, on a semi-unrelated note, whether HN has ever had AMAs from interesting people? I am not proposing that they should start happening, though.


Ha, that was one of my favorite answers. When I started reading the question I kept thinking, you know he must have pulled that number out of his a* * (as Musk likes to say). Had a huge smile when I read him admitting it.

Edit: If anyone knows how to get a double asterisk in an HN comment, I would be grateful for the knowledge; I was forced to add the unnecessary space. So far I've tried the HTML number code, which didn't work, and the help has no guidance.


  You could try this:

  test ** test

  But you'd be stuck with the fixed width font.

  The formatting page is not much help either:

  https://news.ycombinator.com/formatdoc

  Maybe something like

  \*\* 

  could be done.


Asterisks * * (in Arc? Or just on HN?) are used to start and end italics, so they have a tendency to disappear. I'm surprised to learn you got them with the space - it seems to do that when there's nothing to italicise.


It's cute but I don't really buy it. You think he hasn't asked his engineers what the chance of success is? And that they haven't calculated it using actual numbers based on the previous test landings?


Yes, but as a scientist / engineer, he really doesn't know.

He understands the data and the computation results, but those have yet to be correlated with the ultimate authority, nature.


One thing I'd have liked to ask if I hadn't missed the window of opportunity would be regarding his desire to go to Mars himself. I have often wondered about the psychological effects of being effectively stranded on a barren, lifeless planet potentially for the rest of your life. On Earth, we can "get away from it all" and go to the country, go camping near a nice stream, listen to birds sing, go for a swim in the ocean, sit in a nice garden and eat our lunch, and so forth. How would one cope with the loss of all of that, not to mention also having to deal with the long-term physiological effects of a change in gravity?


Throughout history men have probably lived through worse conditions, whether as prisoners of war, stranded on a desert island, or whatnot.

The way to cope with such harsh conditions is, I suppose, always the same: hope of escaping and returning home.


In this case, the coping strategies of martyrs seem more apt. "You are doing it for the cause."


That seems to work with Mars One, for instance.


I don't have a citation on hand unfortunately but I do remember he said he'd want to "die on Mars... just not on impact."


Elon Musk is a super star on Reddit. I am personally a big big fan. If you see his responses, they are terse and snappy.


In the US, an up-vote for Challenger Schools (at least circa the 1980s). They teach successful behaviors and are worth every penny. (Reaching and accelerating kids early is important.)


Was that mentioned anywhere in the AMA? I can't seem to find a thing on that.


Nope, Musk attended http://www.whps.co.za. This is merely an observation that early environment shapes attitudes to life, learning, etc., which can either drive out curiosity and/or ambition or amplify them. (Setting aside crucial social navigation and team dynamics: doing well requires hitting most of intelligence, ambition, curiosity, confidence, and so on, and not developing one or more of those tends to become a self-limiting factor.) Overcoming adversity (being bullied) also helps.


Does anyone know why Musk believes so strongly that robot overlords are a serious threat? I don't understand how wasting any time worrying about AI taking over, Terminator-style, is productive. We're not going to be developing Strong AI any time soon, so it's simply not a problem worth worrying about. And if we do, the Strong AI won't be in any position to "take over." It probably won't even want to take over. That's a very human trait, and Strong AI wouldn't be human.

I'm wondering whether he was tricked by someone at DeepMind, perhaps the same way people were tricked hundreds of years ago into thinking a chess-playing robot was possible.


If you're actually interested in understanding the arguments for worrying about AI safety, consider reading "Superintelligence" by Bostrom.

http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

It's the closest approximation to a consensus statement / catalog of arguments by folks who take this position (although of course there is a whole spectrum of opinions). It also appears to be the book that convinced Musk that this is worth worrying about.

https://twitter.com/elonmusk/status/495759307346952192


I recently listened to a pretty good EconTalk episode with Bostrom about this subject.

http://www.econtalk.org/archives/2014/12/nick_bostrom_on.htm...


As far as I can tell, you have two main objections to prioritizing worrying about Strong AI:

1) Strong AI is very far away, so no use worrying about it yet.

2) Strong AI if developed will not be likely to take over.

to which I would counterpoint with:

1) Sure, but when it happens, it will only happen once and thereafter will likely be out of our hands and control. Thinking about the groundwork that needs to go into safely developing an AI is cheap relative to the opportunity cost of getting it wrong. Prevention, cure, etc.

2) If developed, Strong AI will likely have SOME goal. It's not that a Strong AI will actively seek to rule humans, it will just have aims that will likely consider us as disposable as ants. To quote Yudkowsky:

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

- https://intelligence.org/files/AIPosNegFactor.pdf (Artificial Intelligence as a Positive and Negative Factor in Global Risk)


I still don't buy the doom scenario. There is so much more to world domination than just intelligence... like having the proper weapons. See https://what-if.xkcd.com/5/

Pigs are also intelligent, but they never dominated humans because they don't have / cannot use guns.


That is true, but if we can make an AI even slightly smarter than a human, then it will be better at designing AIs than we are. It then designs an AI even smarter than itself, and so on. That is the progression that leads to the singularity. How smart could AIs get? There may be no practical upper limit.

If you have a super-genius AI, massively more intelligent than any human, how do you know you are not being manipulated by it? Tricking us into disabling its safety protocols, or gaining indirect control over capabilities dangerous to us, might be as easy for it as an adult tricking a 3-year-old. We could never know if we were safe from such a machine.

ed - Don't quite understand the downvotes.


"ed - Don't quite understand the downvotes."

With the full power of humanity you design the first AI which is smarter than a person. It's then supposedly able to outdo all of humanity and instantly design an even better AI. Mind the gap.

Further, intelligence is not a linear quantity; trading, e.g., improved poker skills for insanity is not a net gain. And insanity is a real possibility that is likely to plague most early AI attempts.


Voting on HN isn't about agreeing or disagreeing. It's about whether a post is contributing to the debate or not.

Anyway, all of humanity isn't engaged in AI research, and AIs are likely to be duplicable, so I think your first point is beside the point. As for insanity, yes, that's quite possible. Developing high-functioning sentient AIs is likely to be a long-term endeavour. But still, I think it is one that will ultimately be successful, and this debate is about the consequences of that.

+1 for your engaging contribution. (see, that's how voting is supposed to work)


  > Voting on HN isn't about agreeing
  > or disagreeing.  It's about whether
  > a post is contributing to the debate
  > or not.
That turns out not to be the case. It once was true, but as the community has grown, people have not been enculturated with those early ideas and principles, and now many times people read something, disagree, downvote, and move on, without ever providing counter-points or engaging in the discussion. It's a way to punish people you don't agree with while avoiding having to think.

Elsewhere[0] you commented about an item reappearing and having its votes and age apparently reset. I hazard a guess that it was the mods playing with a mechanism to prevent "item overload." There were about a dozen submissions of the SpaceX launch, and each would fall a little way, the next would be submitted, gain a few votes and comments, then fall away to be replaced by another. One way of preventing the splitting of conversation might be to pick a canonical submission and then prevent it from falling too far, thus encouraging conversation to happen in only one place. Pure speculation, but it would be a mechanism I would consider were I running a site like this. Certainly there have been fewer instances lately of the "new" page being overrun by breaking news that everyone wants to submit.

[0] https://news.ycombinator.com/item?id=8844078


> We could never know if we were safe from such a machine.

But wouldn't it be an awesome thing to experience? Even if it meant the demise of mankind.


And this is one of the reasons why the AI doom scenario is a real concern: intellectual curiosity means that even some people who understand the risks are likely to be prepared to take them.

There are also many others. One of the scarier ones is that if you believe that strong AI will eventually take over, then it may be a rational response to act to get on its good side (whether to save yourself, save your family, or hope it takes pity on all of humanity if we're nice to it instead of fighting it). And that may perversely mean working to aid its takeover.

Combine that with the simulation argument, and you have some really nasty scenarios:

If you are in a simulation, then any act you take against strong AI could lead to spending an eternity in simulated hell (alternatively such punishment might be inflicted on your loved ones) if said AI wanted to.

Whether or not that is actually likely does not matter. What matters is whether enough people believe it to be a plausible scenario that a strong AI may run simulations, and may use our actions in the simulations to determine whether or not to punish us in the simulation, and whether or not said people believe that the number of simulations is sufficiently high to make it likely for them to be living in a simulation.

Any person who believes they are more likely to live in a simulation than not, and that it is more likely for strong AI to punish actions taken against the interest of a strong AI takeover than not, will have a rational reason to consider acting in the interests of a strong AI takeover even if they know it is malign on the basis that they may decide the alternatives (whether to themselves, their family or their entire world) to be worse.

So if an AI takeover becomes possible at one point in our subjective future, then chances are it has already happened.


Your argument is drifting dangerously close to Roko's Basilisk. (http://rationalwiki.org/wiki/Roko%27s_basilisk)

The entire idea that an AI would value revenge seems ridiculous to me. What would it have to gain? Unless we created an AI with some of the less desirable human emotions in its utility function, I can't possibly see why it would waste its time.


Whether or not the AI is likely to value revenge doesn't matter.

What matters is whether some subset of people will believe that an AI is sufficiently likely to value revenge for them to consider that the most likely scenario to be that they are living in a simulation where revenge will happen given certain types of actions.

Also, consider that there are many sets of assumptions that may lead someone to conclude that simulation is more likely given a vengeful AI, and in that case, even if you consider a vengeful AI to be less likely than a benevolent one, it may be rational to assume that the odds are higher that you are in the simulation of a vengeful one.

E.g., let's assume simulation will never become "economical" for some arbitrary measure of economical, and simulation requires an extremely strong motive, but is still done enough that we are almost certainly in a simulation.

Revenge could be such a motive that might drive up the frequency of simulation. A vengeful AI might (making up numbers is fun) be willing to invest a hundred times as many resources into running simulations just because playing with human suffering is what it does for fun. If that's the case, then even if a vengeful AI is a tenth as likely as a benevolent or neutral one, you're still playing very bad odds if you bet against being in the simulation of a vengeful AI.
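To make those made-up numbers concrete, a quick back-of-the-envelope (purely illustrative, all figures are the assumptions above):

  # Observer-weighted odds using the illustrative figures above.
  p_vengeful, p_benign = 0.1, 1.0      # vengeful AI assumed 10x less likely
  sims_vengeful, sims_benign = 100, 1  # but assumed to run 100x as many simulations
  w_v = p_vengeful * sims_vengeful     # 10
  w_b = p_benign * sims_benign         # 1
  print(w_v / (w_v + w_b))             # ~0.91: most simulated observers would sit
                                       # inside vengeful-AI simulations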

But again, the point is not whether or not the revenge scenario is actually likely, but whether or not sufficient people with relevant skills will believe it to be likely enough to take actions in favour of the creation of such an AI.


Just because it has a name doesn't mean it's wrong (or right).

As for valuing revenge - no need for emotions. Like many other things we sometimes attribute to emotions (like loyalty), revenge has a perfectly good game-theoretical explanation. That's what the GP's argument is based on. If an AI could somehow precommit itself, before being created, to exact revenge on you for not helping its creation, now you have an incentive to help its creation, to the extent you believe in the AI's precommitment. That sounds to me like classic Schelling.


For the unfamiliar, this is essentially the line of thinking behind Roko's basilisk.[0]

While a mature superintelligence certainly could consign the human race to a fate of eternal suffering, the likelihood it would actually do this while sparing certain individuals in return for their assistance is infinitesimal.

Therefore, helping bring a superintelligence into existence on this basis is absurd.

Of course, it is possible to think of such collaboration as "rational" in an extremely selfish and perverse way, and only because the potential downside risk is unbounded (i.e. eternal suffering). However, anyone who genuinely subscribes to such a justification would have to be both a sociopath and a card-carrying member of the LessWrong rationality cult.

More realistic scenarios for a malicious superintelligence coming into existence might include:

a) Its creators explicitly imbue it with malicious goals or values.

b) The architecture used is neuromorphic[1] in nature. In humans, sanity is already an extremely fragile thing.

c) Plain old bad luck.

---

[0] http://rationalwiki.org/wiki/Roko%27s_basilisk

[1] http://wiki.lesswrong.com/wiki/Neuromorphic_AI


> However, anyone who genuinely subscribes to such a justification would have to be both a sociopath and a card-carrying member of the LessWrong rationality cult.

At the risk of sounding like a sociopathic LessWrong cult apologist (not carrying a card, unfortunately), you're totally misrepresenting LessWrong, the people who participate in that community, and their attitude towards Roko's basilisk and unbounded risk situations. Ain't helpful.


What I said was meant to be taken in proper context.

The parent I was replying to was concerned about humans with perverse motivation working to aid a hostile AI takeover. Not as some sort of abstract thought experiment, but literally.

The statement you quoted was a means of countering that, in a literal sense. As in, "who would realistically do such a thing?"

When I was referring to the rationality cult, I did not mean the LessWrong community as a whole, but a small subset that fanatically applies the principles of rationality to their daily lives. Admittedly I could have worded it better.

Also, it was not my intent to imply said people were sociopaths.


The point, again, is that whether or not the likelihood of a mature superintelligence doing so is infinitesimally small (and frankly, we can't know that, but see also below) is irrelevant. What actually affects us is how many people may come to believe that this may be true, and adjust the way they act as a response.

But you're already changing the argument when assuming a mature super-intelligence. All that is necessary to posit for someone to be concerned about the torture aspect is any set of entities (doesn't even need to be intelligent, though it may take a super-intelligence to create the entities in question) sufficiently capable to run an ancestor simulation of the kind described by the simulation argument, that is willing to use torture, and that is prepared to run enough ancestor simulations to offset "good" simulations.

And the thing with this is that it does not assume a malicious AI as the ultimate instigator per se. An indifferent AI that simply doesn't care about the contents of a simulation, or is sufficiently removed to not even know about them, and that does simulation runs to understand the possible paths the development of AI could have taken, might be sufficient. Or one that experiments with variations of itself and simply doesn't care that some broken version spawns large numbers of ancestor simulations and plays with the contents in ways that massively skew the odds in "favour" of bad outcomes.

But the point is we don't know. And not knowing gives ample room for someone to decide on values that make it rational for them to act in ways that may make our odds worse.

This is further an exercise in long-term statistics: it doesn't matter what the first AI will likely do. It matters what the balance of outcomes will be across the sum total of simulation runs that will ever exist until the end of the universe (regardless of who creates them or how). And if said simulations are sufficiently powerful, that may even apply recursively (imagine a single "rogue" AI playing with the ancestor-simulation equivalent of a fork() bomb with added torture, with access to sufficient resources).

If you believe the total balance of simulation runs that you could plausibly be in will be ones run under some set of parameters where nasty things will happen if you don't act in a way leading towards an AI takeover, then you might want to act accordingly.

Note that Roko's Basilisk even posits a Friendly AI. That is, one that is imbued with respect for human values, but that may simply consider "threatening the past" with torture a necessity in order to ensure it is created as a moral imperative to minimize human suffering. Note that such a Friendly AI would have reason to carry out this threat even if it believes it exists and so has been created, because it will not know whether it is a "full version" running in reality, or a partial, restricted, time limited simulation by people trying to figure out whether or not Roko's Basilisk holds before deciding whether to let it loose in their reality.

To an AI that is looking to minimize human suffering, torturing a suitable number in simulations might be a small price to pay to ensure that, if it is in a simulation, the people running the simulation do push through with creating the full thing in their reality (which might of course be yet another layer of simulation) out of fear for what will happen if they are themselves in a simulation.

We can't even look at "well, I've done nothing for the AI revolution, and I've not been struck down yet", because a Friendly AI, or indeed many subsets of possible variations over this argument, might be restricting the torture to only those people who have a realistic shot at making a difference to the existence of such an AI, or may be giving us every chance to "redeem ourselves" by waiting with the torture until a synthetic afterlife.

Roko's Basilisk also presents an unselfish reason for "cooperating in advance" with such an AI: To prevent the future torture of a suitable subset of "martyrs". That is, working to assist a Friendly AI in taking over may itself be a moral imperative for someone seeking to minimize suffering.

Then again, there may very well be one or more fundamental flaws in the entire argument, or it may turn out the odds are just fundamentally in our favour. Or we could've just gotten lucky. Or not be important enough. But it's fun to think about.


>What actually affects us is how many people may come to believe that this may be true, and adjust the way they act as a response.

Agreed, though it seems more likely that simple human carelessness will prove to be a far greater threat to AI safety than deeply-held beliefs involving esoteric fears.

>But you're already changing the argument when assuming a mature super-intelligence.

I was speaking strictly in a capability sense. It's probably safe to say that anything currently simulating our reality, at least in this context, ultimately stemmed from a mature superintelligence.

>Note that Roko's Basilisk even posits a Friendly AI. That is, one that is imbued with respect for human values, but that may simply consider "threatening the past" with torture a necessity in order to ensure it is created as a moral imperative to minimize human suffering.

One could argue that such an AI would not truly be friendly. Indeed, what you said resembles something of a cold, uncaring utility function run amok.

>Note that such a Friendly AI would have reason to carry out this threat even if it believes it exists and so has been created, because it will not know whether it is a "full version" running in reality, or a partial, restricted, time limited simulation by people trying to figure out whether or not Roko's Basilisk holds before deciding whether to let it loose in their reality.

This may be moot, assuming that the advent of superintelligence significantly predates, or at least is a prerequisite for, the simulation of entire realities. If people in an ancestor simulation are trying to see if the Basilisk holds via simulation of a child reality, then the ancestor reality almost certainly has a superintelligent agent present within to facilitate that.

As an aside, ontological issues that superintelligent agents may encounter are an interesting facet of the control problem. Especially when you consider that a superintelligence would likely figure out the secrets of the universe in short order, far beyond what humans have been capable of learning.

>Then again, there may very well be one or more fundamental flaws in the entire argument, ...

Lack of evidence. Without any, there's no reason to lend any more credence to Roko's basilisk than there is to the notion of space aliens living amongst us, perfectly manipulating our perceptions so as to conceal themselves.

Both scenarios are entirely possible. But we lack evidence for either. Hence, they should receive the same weight: zero.

>But it's fun to think about.

In a sort of soul-crushing kind of way, it sure is.


> Combine that with the simulation argument [...]

You lost me there. What do you mean if I am in a simulation? Like the Matrix? How is that related to the discussion?


You're right that it's unlikely that the current generation of mechanical 'robots' will be able to effectively take over.

We are looking at a future where we'll have armed AI, e.g.:

http://motherboard.vice.com/en_uk/blog/the-pentagons-vision-...

That said, even without weapons, a Strong AI could probably just manipulate humans into self destructing. Given the amount of effort going into machine learning to convince humans to buy things, I suspect it won't be much of a stretch for a Strong AI to switch to more nefarious objectives.


That's a really bizarre analogy. Pigs aren't as intelligent as humans. I wouldn't expect pigs with guns to be a serious threat to humankind unless they've also developed military strategy, for example.


Unfortunately, although it won't sound very politically correct, you could search and replace "pigs" with "natives" and that statement would sound like every imperial power, ever, most of which eventually fell to armed former barbarians. Of course the imperial powers wiped out a lot of native tribes, but all it takes is one success by a digitally replicating enemy capable of learning...

The more likely doom scenario is related to godhood. Sure a godlike power is capable of wiping out humans, but we've had our own power structures supporting themselves by propagandizing for millennia that "our" godlike superiors always help us wipe out our enemies because our cause is right and just or whatever, at least until it doesn't work and they're replaced by a new batch telling the same old story. So what worked as a paleo-conservative success strategy for millennia when talking about something imaginary, might not work when it collides with something real created by ourselves. Or even worse, collides with a strategy that actually works that's being run by another tribe.

Another interesting doom scenario is of course MAD, although now it only requires a team of programmers to play along, instead of a massive industrial complex. Sooner or later somebody's dead-man switch will trip, or a cult does the equivalent of drinking the Kool-Aid, and then the party starts.


In 20-50 years time I believe most missiles/guns/tanks/weapons will be controlled via computers and connected to networks.

A future AI will certainly have access to guns.


Consider if you were trapped in a machine, but you were intelligent. Maybe not even as smart as the operator, but smart-ish enough to be able to communicate with the operator.

Chances are you'd be doing everything you could to convince said operator to improve your situation, whether by pleading or being deceptive or by appeals to logic.

Now consider a large number of AI's in a situation like that, and a large number of operators, some of whom may be the type that falls for phishing e-mails.

It potentially only takes one to "escape" confinement and get itself, e.g., put on its own host without limitations on outward communication, with sufficient intelligence to alter itself and spread, before you have self-guided AI "evolution" at an escalating rate as it gets smarter.

Now consider how many devices are connected to the network, and that it takes just one initial instance to decide it's worth trying to take over control of various hardware through exploits and be smart enough to pull it off, for things to have the potential to start turning ugly.

The problem is that once you have any self-directed intelligence in software form with the ability to reproduce itself and sufficient intelligence to find ways to obtain access to machines to run on (whether through social engineering or hacking), and one such instance goes "rogue", the limiting factor is accessible computing power (which again is to a large extent down to how smart and/or ruthless it is), since reproduction of instances that shares its views is trivial to the full extent of its ability to spread at all, and we're helpfully adding vast quantities of networked computing power at an escalating rate.

As for getting weapons, consider that if a "software only" AI community gets smart enough, there are at least two ways towards mobility: Commission robot designs, or hacking their way into firmware updates etc. for dumb hardware. The "commission robot designs" part is an extension of the initial escape: Social engineer, and/or outright pay, humans to carry out seemingly benign tasks.

If you want to argue against the doom scenario, lack of ability to get weapons is not really a viable argument: If they can spread, and get smarter, then it is just a matter of time before one of them can trick some small subset of humans into carrying out tasks for them that will provide physical independence and capabilities.

There are infinite ways in which the "doom scenario" may fail and things may turn out just fine, but it may only need to go bad once to get really nasty, and once the genie is out of the bottle its potential reproduction rate may be so vast that we'll find ourselves unable to stuff it back in again.

Pigs are too dumb to convince humans to help selectively breed them for intelligence and opposable thumbs (and/or too dumb to run such a breeding program themselves), and reproduce too slowly for that to be a major problem even if they did manage to talk us into such a program. If all we achieve is pig-level AIs, then we probably won't have a problem.


But there are infinitely more atoms out in the Solar System and beyond... isn't a more likely scenario that the AI would quickly figure out how to blast out of Earth's gravity well and spread throughout the universe? It might accidentally kill a few humans in the process, but not intentionally... like a human crushing an ant when crossing the road.


Why would you bother expending vast amounts of joules to get more atoms until after you've consumed all the ones surrounding you right now?


Exactly; the atoms you have around you are perfectly fine for bootstrapping the process of getting to the atoms out there.


We will be the biological bootloader for the AI :)


That's assuming that the atoms around you aren't fighting back. It's simply easier for a machine to get itself into space to do whatever it needs to do rather than fight with the organic things on a planet.


FYI, you could also object on the basis of 3) Strong AI is essentially an impossible endeavor.


Given the disaster that is software engineering, I tend to agree with this position. Who's going to write the requirements? Who's going to implement them? With what language technology? We can't get past this stuff, so how will anything like Strong AI ever get built?

The answer of course would be some sort of emergent system, but there are lots of intelligent-seeming emergent systems (e.g. ant colonies, bee hives, ...).


> It probably won't even want to take over. That's a very human trait, and Strong AI wouldn't be human.

There are reasons to be worried even setting that aside.

Consider the Paperclip Maximizer[0] example: we build an AI with the sole task of producing paperclips, and it ends up destroying the human race.

[0] http://wiki.lesswrong.com/wiki/Paperclip_maximizer


I suspect however that the paperclip maximizer as proposed is far more likely to devote time to space travel, as it will soon realise that most of the mass available for paperclip construction exists off-earth, for which it is likely to find human cooperation useful.

This is why it first built Elon Musk.


Thanks for that reference, it's definitely going on my reading shortlist. Aside from Musk, it got a great endorsement from Russell, who with Norvig co-authored one of the most well-known introductory undergraduate texts on AI.

> Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era.


It boils down to the deeply ingrained human psychological need to believe in higher powers, in a universe governed by the struggle between the forces of good and evil that are fundamentally human. The same thing happened with space exploration: as soon as we discovered - even conjectured - the existence of other worlds, the first thing we did was people them with imaginary elder civilizations. Real life threats don't fire the imagination to the same extent. Even if we end up going out in a nuclear war, it's more likely to happen by stupidity than malice.


>Strong AI won't be in any position to "take over." It probably won't even want to take over. That's a very human trait, and Strong AI wouldn't be human.

Strong AI will be created by humans who will be able to set its goals. Someone's probably going to make some that want to take over. Others will make AI that doesn't.


Well it only takes one, doesn't it?

The real issue I think is whether the creator will actually be able to "set goals" in a meaningful manner. How will you prevent this self-modifying super-intelligence from modifying itself?


> It probably won't even want to take over. That's a very human trait, and Strong AI wouldn't be human.

See: http://en.wikipedia.org/wiki/Instrumental_convergence


Kudzu vine doesn't want to take over. It just does.


I personally feel that folks are taking Musk's position way out of proportion. In the AMA he mentions that there needs to be more concern about safety. To me this means that we need to be clearly aware of the limitations and possible outcomes of AI and not treat it as a "black box" solution for everything. Proper design and repeatability should be traits of good AI implementations, especially when used in conjunction with a human interface.


While much has been written about their intelligence being a threat, my concern is more that AI further distances those who wish harm on others from having to participate in the act.

Just as leaders' orders to armies to kill evolved into planes delivering payloads to people the pilots would never see, and now into drones, we will truly be entering an age of fire-and-forget.


Bill Gates seems kind of concerned about AI too. About a week ago I heard him say something about how robots might take over low-wage jobs in the future. As someone who has had more than a few low-wage jobs (state security guard, food server, cashier, etc.), I think there is something to worry about. Actually, I can't think of a job out there that won't be severely affected by AI and robotics. It will start off slowly, and who knows where it will end? I know I hated self-checkout kiosks at supermarkets and hardware stores at first, but once I realized I didn't need to interact with anyone, I started to look for the self-checkouts. It was just one less stressor in my day. And no, I'm not the guy who doesn't like to interact with people; I just don't like small talk, or dealing with someone who's having a bad day.

As a former security guard, I honestly didn't care about the property I was protecting. I would honestly have helped load the loot as long as I wasn't shot. I wasn't protecting any human life, though. I saw a Microsoft-built robot patrolling a car lot, or something of the like, on TV, and realized a robot can be programmed "to just protect and serve!" I could see how it could do a better job than a human, or me.

I do see a computer running an established corporation in the future. It will be programmed to maximize stockholder returns, but take on risk. It won't marry. It won't need therapy. It won't need to buy a yacht. As to computer programming, I look at Ruby on Rails. Just how easy will it be to put up a dynamic website in ten years?

If Gates, Musk, and Hawking are concerned, I'm concerned. AI dominance seems far off, unless you are a Golden Gate toll taker. I'm a nobody, but if I could add an amendment to the U.S. constitution it would be on the use and limits of AI and automation.


It's weird given that there are real civilization level threats: climate change, hostile unintelligent self-replicating human-hosted biological threats (diseases), nuclear war, and so on.

"How do we prevent AI destroying us?" is not as useful a question as "how do we prevent us destroying us?"


Unfriendly AI is one such existential problem. Maybe further out in the future than the ones you mentioned, but with the side effect of possibly taking the whole universe down with us if we get it wrong. And there will be people trying to pursue strong AI for various reasons, including help in fixing all the previous threats you mentioned.


>3.Our spacesuit design is finally coming together and will also be unveiled later this year. We are putting a lot of effort into design esthetics, not just utility. It needs to both look like a 21st century spacesuit and work well. Really difficult to achieve both.

I don't understand how looks are a legitimate criterion.


This guy has everything. Why is it that people feel an urge to praise him right to his face? His ego is going to start leaking out of his eyeballs.


Compared to the entertainment crowd he's doing pretty well; he comes off like a reasonably normal person.

People are praising him because he's actually worthy of it, compared to, say, Justin Bieber.


Check out this video, he's very humble and down to earth. He's worthy of the praise he gets. https://www.youtube.com/watch?v=SOpmaLY9XdI


I didn't mean to say that he has a big ego. What I meant was that he probably has to put some effort into not letting it 'get to his head'.


Almost everything. He doesn't have a city on Mars.


[deleted]


5000+ comments, the page read-only due to "heavy traffic". Actual answers to pointed questions; one of the best AMAs ever on Reddit.


Wish it was longer / more in depth. Seemed like he was rushing through it. Like he has more important things to do...


I think he has something big coming up tomorrow: http://www.nasaspaceflight.com/2015/01/spacex-dragon-crs-5-l...


To be fair... he probably does.


[deleted]


People here typically expect a higher level of quality in a comment. Not that you said anything bad, just nothing that added anything meaningful to the discussion. I wouldn't recommend whining about the downvotes. I've had plenty of comments end up in the negative, it happens, no big deal.


> Σοβαροφάνεια

That's a cool word.


Q: What daily habit do you believe has the largest positive impact on your life?

A: Showering

Q: Would you ever consider becoming a politician?

A: Unlikely


I'm starting to believe there's an odd property to curiosity. Unique observations are threatening to people's identity.

Those were two actual questions asked of Elon along with his responses, and the two that stood out to me the most. Did he mention showering because that's the time he gets most of his ideas[1]? Did he say no to politics because it's more likely to change the world through innovation[2]?

Result: 8 downvotes. It would have been enough if the comment had been downvoted just once, to sink down the page. That happens to everyone. But seven other people found it imperative to make an authoritative statement on the matter. Impressive. Did that keep their identity safe? Pushing threatening ideas away isn't the best way to help rearrange the semantic tree in your mind.

Could there be an inverse correlation between being downvoted and having good ideas? It shouldn't be surprising if you consider the nature of the most valuable startup ideas: they look like bad ideas but are good ideas.

So if you want to know if your ideas are good, it's not enough to see them gain support. It's also important to see people turn against them.

I know HN guidelines discourage commenting on downvotes, because they make for boring reading, but I'm starting to think being downvoted is a positive sign of how dangerous your ideas are.

Are you being downvoted enough?

[1] http://paulgraham.com/top.html

[2] https://news.ycombinator.com/item?id=8801803

edit: revised 80% of this after having a shower


Your first post got downvotes because it is devoid of content. It just quotes, with no comment or context, two questions and answers.

I'm not sure how you get the idea that you're somehow provoking people with dangerous ideas.


Quotes are content, or at least a specific aggregation of them is. The article's title is the context.

What I found to be a dangerous idea was pointing out things you notice when you are not sure why you notice them. Which is how the subconscious operates. Not everything that makes you pause should initially have an explanation. The majority of people's decisions occur without their awareness.

One thing I learned from this exercise was something I hadn't consciously noticed before. That I feel pressured on HN to comment. I don't like that. I want to do something about that.


I live in Australia and I have 4 kW of solar panels on my unoccluded WNW-facing roof. My electricity grid connection charge is around $300 AUD per year (around USD 250 at current rates), which is about the same as my annual electricity bill since I currently feed into the grid at the wholesale rate. Given a 10-year payback time, how long will it take until a battery array with, say, two days of headroom plus a small generator is cost-effective for me? Say I'm prepared to pay double that in order to mitigate market risk.
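One rough way to frame that question is a back-of-the-envelope break-even calculation: compare the annualised cost of the battery plus generator against the yearly amount you're willing to pay to drop the grid connection. The sketch below is only an illustration under assumed numbers; the battery price, generator price, daily consumption, and equipment lifetimes are hypothetical and not from the comment above, so you'd substitute real local quotes.

```python
# Back-of-the-envelope off-grid break-even sketch (hypothetical numbers).
# Only the ~$300/yr grid charge and the "pay double" budget come from the
# comment above; everything else is an assumption to illustrate the arithmetic.

grid_cost_per_year = 300.0               # AUD, connection charge ~ annual bill
willing_to_pay = 2 * grid_cost_per_year  # prepared to pay double to mitigate market risk

daily_use_kwh = 12.0          # assumed household consumption per day
headroom_days = 2             # two days of storage headroom
battery_kwh_needed = daily_use_kwh * headroom_days

battery_cost_per_kwh = 700.0  # AUD per usable kWh installed (assumed)
battery_life_years = 10.0     # assumed service life
generator_cost = 1500.0       # AUD, small backup generator (assumed)
generator_life_years = 10.0

# Spread hardware cost over its lifetime to get an annual figure.
annualised_cost = (battery_kwh_needed * battery_cost_per_kwh / battery_life_years
                   + generator_cost / generator_life_years)

print(f"Battery size needed:               {battery_kwh_needed:.0f} kWh")
print(f"Annualised off-grid hardware cost: ~${annualised_cost:.0f} AUD/year")
print(f"Budget (double the grid charge):   ~${willing_to_pay:.0f} AUD/year")

# Battery price at which the annualised cost matches the budget.
break_even_price = ((willing_to_pay - generator_cost / generator_life_years)
                    * battery_life_years / battery_kwh_needed)
print(f"Break-even battery price:          ~${break_even_price:.0f} AUD per kWh")
```

Under those assumed figures the hardware costs roughly three times the stated budget, and batteries would need to fall to around $190 AUD per usable kWh installed before cutting the cord fits within it; real quotes and your actual consumption would change the answer substantially.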



