Hacker News new | past | comments | ask | show | jobs | submit | more tangled_zans's comments login

eyeroll


I'm totally serious


I'm surprised by people saying "well how could AirBNB end racism?!"

It's not like they have access to all the data from who the hosts accept and reject and the profile pictures of each.


I know right. It was obvious from the start that he's full of shit, how are people still arguing about this?


Very wisely said.

People in AI try to replicate the "power of the human brain" when there are billions of "human brains" that are surviving at a subsistence level or working in mind-numbing menial tasks.

I can't understand the cognitive dissonance that makes a person marvel at how "amazing the brain is" while simultaneously ignoring the suffering of billions of those brains.


>I can't understand the cognitive dissonance that makes a person marvel at how "amazing the brain is" while simultaneously ignoring the suffering of billions of those brains.

I can.

Humans are good at coming up with brilliant ideas (such as, say, the concept of translation). But they are absolutely poor at executing them effectively and "at scale" (translating arbitrary works from one language to another). So "AI advocates" can talk about how amazing the brain is in coming up with ideas (as if ideas were all that were needed), but what they really want are mechanical brutes that are able to execute those ideas quickly and effectively.

At least some people hope that the proliferation of AI labor could mean a reduction of human labor, potentially reducing the "suffering of billions of those brains". This, of course, hinges on whether the gains in productivity get redistributed fairly (or whether they just accrue to those who already have capital). And then there's all the social turmoil that occurs during the transition phase: humans may not want to be obsolete, humans may actually like working, AI accidentally becoming an 'existential threat' due to human error, etc. The brains will suffer more in the short term, in the faint hope that they will suffer less in the future.

EDIT: "The vast majority of automation tasks don't require advanced AI. The vast majority of human work can be removed without recourse to AI."

I would argue that by the time automation handles those tasks and human work becomes unnecessary, we already have AI. We may never reach the stage of Strong AI because it turns out that what we actually do is not intelligent enough to require Strong AI.


I like your analysis, thanks.

> We may never reach the stage of Strong AI because it turns out that what we actually do is not intelligent enough to require Strong AI.

That's interesting. If we find a rigorous enough definition of "Strong Intelligence", would humans necessarily qualify as one?


I'm not sure. That's why we need to come up with a good, rigorous definition that doesn't elevate humanity, but is instead an objective, reasonable definition of intelligence that humans can agree upon. I'm doubtful that we can ever find that definition though.

Right now, humans consider intelligence to be "whatever machines haven't done yet" (Tesler's Theorem), but as machine capabilities increase, there is a real possibility that humans may come to believe that intelligence doesn't exist at all (after all, if machines can do everything, and if machines are not intelligent, then nothing requires intelligence). [Source: https://plus.google.com/100656786406473859284/posts/Yp83aFwF...]

I do think that intelligence does actually exist and that current AI can already do intelligent things, but the stuff that current AI can do won't match my vague understanding of the term "strong". If current trends continue indefinitely, then, of course, we won't ever have Strong AI, but we will still have machines that do everything. At least, that's one possible way of thinking about intelligence.

But that's the thing, we don't have a good definition of intelligence at all (and I don't have one either) so we don't really know what's going on. We could invent Strong AI and never even recognize it, and maybe even dismiss it because it doesn't resemble what we think of as intelligence (much less "strong intelligence"). There's just so much that we don't know that talking about it is very difficult. AI is not just a field where you get to write pretty algorithms. It is also a philosophical field, and it is a shame that the philosophical and the practical aspects of AI are disconnected.


What I think is the crucial missing component is: how does your intelligent system define goals?

Right now goal-setting is something machine intelligences do not, and cannot, do for themselves. Humans must define the bounds of a problem carefully before a robot brain can perform useful work (some kind of numerical optimization).
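To make that concrete, here's a minimal sketch (plain Python, gradient descent on a made-up objective; the objective, learning rate, and step count are all things a human had to choose, which is the point):

```python
# A human decides the goal: minimize (x - 3)^2.
# The machine only executes the optimization; it never chose the objective.

def objective(x):
    return (x - 3.0) ** 2

def gradient(x):
    # Analytic derivative of the objective, also supplied by a human.
    return 2.0 * (x - 3.0)

def minimize(start, lr=0.1, steps=200):
    x = start
    for _ in range(steps):
        x -= lr * gradient(x)
    return x

x_best = minimize(start=0.0)
print(round(x_best, 4))  # converges near 3.0
```

Everything "intelligent" about this loop (what counts as good, when to stop, what the search space even is) was set in advance by a person; the program just grinds out the arithmetic.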

The preliminary problem, then, is: how do humans define goals?

And the final problem: construct an intelligence that is able to efficiently set and achieve goals that are broadly in line with human goals.

I think this statement of the problems neatly sums up my difficulty with the notion of "strong AI", or "AGI", or Robot God or what-have-you and the possibility that it might be somehow useful in the world.

Because the way humans set goals, I think, is through vague heuristics that are represented as narratives carried by culture and society; we hold these narratives and pass them back and forth to each other, through various tongues and modes of fashion.

This means that human desire is the product of a constantly-shifting stream of socialization, which we are all drinking from and pissing into at once. The only meaningful way to accurately represent this, I think, is through engagement in it. You must participate in culture to "get it". Where this participation breaks down ("let them eat cake") we get strife.

Where does this leave the poor robot mind? It can only be "intelligent" in the way that we want when it can appreciate the horror of losing its daughter to a prison camp, when it can come to feel the memory of an inherited tragedy as both burr and serious weight. At this point we're just raising children again.

At any other point it's simply a dumb slave, doing exactly what we tell it - or a capricious, self-serving monster to be fought.


And since we don't know how humans decide their own goals (because knowing that would be a very revolutionary discovery that would immediately be used in a variety of other fields, including politics and advertising), we can never really establish a road map to building "strong AI"/"AGI"/"Robot Gods" (or even recognizing if we have built one by sheer accident). Clever. I like that.

There are probably ways to "cheat" your criteria though by having AI simulate the idea of discovering goals and acting on them, such as building a bot that searches Tweets on Twitter and then writes Tweets based on those Tweets it discovers. But these are "cheats" and won't be universally accepted. We could argue, for instance, that this bot really has a higher-level goal of finding new goals and carrying them out, and is only coming up with "lower-level" goals based on its initial "higher-level" goal. So, again, you're probably right. We don't know how to have AI create goals on its own...we can only give goals to it.
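As a toy illustration of that "cheat" (everything here is hypothetical: `search_tweets` is a canned stand-in, not a real Twitter API call, and the "goal extraction" is a deliberately dumb keyword match):

```python
# Hypothetical stand-in for a Twitter search; a real bot would use an API client.
def search_tweets(query):
    return [
        "we should plant more trees",
        "just had lunch",
        "someone ought to archive old forums",
    ]

def derive_goals(tweets):
    # The "lower-level" goals are just extracted from what other people say.
    return [t for t in tweets if "should" in t or "ought" in t]

def act_on(goal):
    return f"Working on it: {goal}"

# The only real goal -- "find goals and act on them" -- was still given by a human.
for goal in derive_goals(search_tweets("goals")):
    print(act_on(goal))
```

The bot looks like it is choosing its own goals, but the meta-goal and even the definition of what counts as a goal were baked in by the programmer, which is exactly the objection in the comment above.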

I would say that "dumb slave[s]" or "capricious, self-serving monster[s]" are still threats to worry about though. Just because robots do what we tell them to doesn't mean that they will do what we want them to. Bugs and edge cases can exist in any system, and the more complex the system, the more likely it is for those bugs and edge cases to slip by unnoticed. These bugs/edge cases could lead to pretty catastrophic results. Managing complexity when programming AI would be a good place for "AI Advocates" to focus.


Well said. +1 for mentioning Aubrey.


I wasn't sure what to make of riprowan's above accusation that /r/bitcoin has become a toxic environment, but you've pretty much confirmed it.

Sorry for your loss.


Does anyone have data for how many companies from the previous batches have survived so far and what their valuation is?


Nice! You got a link?


Not ready yet ;) but I'll be posting it when it is!


Wow, thanks for sharing! I did look at CoreNLP before and the fact that it's in Java has put me off, so this will be useful :)


You're welcome, I'd love to hear what you end up building.


Hey! I was replying to your comment on my other thread and then it disappeared :P You sound like you're going through a lot of the same stuff that I am. I'm 27 also and my mind is a maze of incomprehensible things. Let me know if you want to chat about it, my e-mail is in my profile. Might be something that we can both share insights on!

