
I'm pretty sure an NN AI could beat most players in Starcraft. Starcraft is actually pretty straightforward and the meta hasn't changed much for a while, which means the NN will have tons of training data. By seeing the revealed map and learning to send in a scout a few times, the AI could be frightening.

Harassing is also straightforward, and high-level bluffs are relatively hard to pull off in Starcraft (you need to aim at the mineral line, for example), so out-of-the-ordinary experiences are rare.




The training data would give them a considerable advantage against non-experts. However, the human pros have managed to bluff them or exploit their patterns in every competition to date. You might be underestimating the risk that bluffs or just odd behavior pose to a future competition. I hope they figure out how to nail it down, as it's critical for AI in general. They just haven't yet.

The other angle is that humans got this good with way less training data and personal exploration. Success with training on all available data would mean AIs could solve problems like this only with massive, accurate hindsight. Problems we solve in the real world often require foresight, too, either regularly or in high-impact, rare scenarios. We'd still be on top in terms of results vs. training time even if humanity takes a loss in the competition. :)


>The other angle is that humans got this good with way less training data and personal exploration.

I think you are underestimating how much training goes into a human. I would not mind seeing how a newborn baby does against one of these AIs.


I already said there was a huge set of unstructured and structured data fed into the brain over one to two decades before usefulness. The difference is that the brain's architecture doesn't require an insane number of examples of the exact thing you want it to do. It extrapolates from existing data with a small training set. Further, it shows some common sense and adaptation in how it handles weird stuff.

Try doing that with existing schemes. The data set required within their constraints would dwarf what a brain takes, with worse results.


The comparison is contaminated by the test being preselected for a skill that humans are good at (i.e. the game has been designed to be within the human skill range of someone from a modern society).

I am sure you could design a game around the strengths of modern AIs that no human could ever win. What would this tell us?


I have no idea. Humans are optimized to win at the real world against each other and all other species. That's a non-ideal environment, too. Designing an AI to win in a world or environment optimized for them might be interesting in some way. I just doubt it would matter for practical applications in a messy world inhabited by humans.



