If this George Dvorsky fellow were a software developer, like me, he would be smart enough to know that technology advances exponentially, in all fields, all the time. If you want proof, just compare today's Google Docs to WordPerfect. It's like, a million times better. Anyway, that means that all technological problems will be solved sometime in the next few decades. We probably won't even need to do it ourselves: now that we have AI that can tell the difference between a bear and a mouse after the millionth try, we basically have everything we need to build a super-intelligence that can solve every problem for us.



I disagree and actually think humans are very prone to overestimate how quickly certain technologies will develop. A good example is self-driving cars, which are still years – if not decades – away from being truly production-ready. I'm not even close to being an artificial intelligence expert, but I know its current capabilities are extremely limited and we are nowhere close to having a general AI; we don't even see a feasible theoretical path to get there from where we are today. Getting a computer to conduct abstract reasoning is very hard.


Well, that's obviously wrong, because some people used to think that a computer would never beat a human at chess. It follows logically that no one can ever overestimate how quickly technology will develop.


The form of your argument is that some people were wrong about a subset of X, therefore the opposite of what those people believed must be true for all of X. That's clearly not a valid argument.


And Bill Gates said that 640K should be enough for anybody.


"If you want proof, just compare today's Google Docs to Wordperfect. It's like, a million times better." Really? There's collaborative editing, but in the end it's still just a writing tool. It's not like Google Docs can write my progress report for me.

"now that we have AI that can tell the difference between a bear and a mouse after the millionth try, we basically have everything we need to build a super-intelligence that can solve every problem for us." So we have machine learning that can do decent image recognition if you feed it massive dataset. How does that get to general AI? What, exactly, is the dataset I'm feeding a neural network to…live on Mars?


Machine learning is exactly like human intelligence, only less powerful. We know this because GPT-2 can't count, and babies can't count either. It follows that all we have to do is use more powerful hardware, and then we'll have general intelligence, which is the same as super-intelligence.

Intelligence is the only thing necessary to accomplish any task. We left the so-called laws of physics behind around the turn of the 20th century. Medieval peasants would call cell phones impossible, so it logically follows that actually nothing is impossible.


Sarcasm, or just trolling?



