
My prediction is that no matter what limitations you add, some people will always reject the notion that a computer can be competitive with a human at anything that humans are perceived or expected to be uniquely good at. In chess, there were decades of resistance to the idea that computers were truly competitive with the best human players in a "fair" match, even though that game has little or no physical component (I suspect this persists to this day).

For some people, either there will be endless litigation of every tiny physical difference between the computer and human player that makes it "unfair," or the premise will just be abandoned and we'll hear things like "well, yeah, computers are great at RTS games, but RTS skill isn't really a sign of true intelligence."




As the objector above, I feel the need to defend myself despite the lack of an attack :)

I think you are correct. There will always be those people.

I hope I don't fall among them - I raise the distinction about being "like" a human not because I think it makes a good/bad qualitative difference, but because I'm hoping to avoid such comparisons. To me, far from proving it is "good" or "bad" at the game, AlphaStar is interesting for the behaviors it uncovers that are useful to humans. (Example: AlphaStar overloaded workers - a strategy that had been discarded by almost all pros years before, and is now enjoying a reevaluation as a result.) Paying attention to how the I/O is different matters to such elements, even if it is pointless in the comparison of "true intelligence".

FWIW, when it comes to AI I have a more Minsky-view of things (in my limited understanding) and think that we're comparing apples and oranges without any awareness that it's all fruit - we only KNOW apples. I think AlphaStar already has a better understanding of RTS than I do (low bar), even if we ignored the differences. That, however, isn't terribly exciting. AlphaStar showing us new tricks we can use - THAT'S interesting. (And now I want a segmented apple, dangit)


I think philosophically the idea of humans competing with computers comes down to balancing two sometimes opposing things: A) which types of skills are interesting to test, and B) which types of skills are inherently interesting for humans to have.

For A, this comes up a lot in discussions about video game design and balance. Do we really want to be testing how good players are at detecting cloaked units or exactly counting groups of units in battles? I tend to think those aren't super interesting strategically, tactically, or mechanically.

For B, there's a reason that human competitive weightlifting or sprinting is still interesting, even though everyone knows that machines could trivially win those competitions. Of course, those aren't really tasks that are considered primarily measures of intelligence (although, see Moravec's paradox). It's damn cool to see the limits of human ability stretched.

Of course, questions about expensive gear, performance-enhancing drugs, and even prosthetics and cybernetics can already challenge our philosophy of what makes human competition "fair." We inherently want to test the inequalities of humans, but we're only interested in certain inequalities. Generally, we're interested in who can lift the most weight, not in who could take the most growth hormones without dying.


People often pick something easily categorised for these comparisons, which often seem to be tasks ideally suited for the current direction of automation. A computer will never beat a human at chess. Go. StarCraft. Super Mario. Something else with a clear and simple set of inputs and outputs, and a fairly easy "goodness" measure.

Simply thinking out loud: take a human player skilled in chess and DeepMind's chess player, and ask them both whether we could teach a badger to drive a school bus, and what changes we'd need to make to the bus, the badger, and the system around it. Facetious as I'm being, this is the kind of situation in which I don't see AI making any qualitative inroads, and which humans remain good at - massively out-of-context problems.

As an aside, on the subject of "true intelligence" or "general intelligence": I'm not convinced there's any such thing, and if there is, I'm even less convinced that humans have it.


AlphaStar might have played 200 years of StarCraft, but we have been playing the general intelligence game for 50,000+ years.


And we suck at it! :)


time for me to insert my pet theory:

consciousness (and the higher-level awareness that feeds general intelligence) costs calories. Thus, we are evolved to MINIMIZE THE NEED. We like to think of ourselves as self-aware, and we can be...but most of the time we're in a lower state. When this near-lizard-brain state runs into something it doesn't have a preprogrammed response to, consciousness is engaged. We figure out how to respond to such situations...and we shut down again.

Once we learn to break this cycle, a great many wonders and horrors will be unleashed as we wrestle with what to do with a larger quantity of time being AWARE.

I myself am terrified that I don't get bored the way I used to as a kid. I assume this is because my lower-awareness self has plenty of pre-programmed tasks to manage and my higher awareness just doesn't get activated nearly as much.


Did you read the defense of boredom article that hit the homepage a couple days ago?




