
> So then why can our brains do it?

Because current NNs only simulate, like, less than 1 mm^3 of brain matter. Someone writing lyrics for a song has millions of such tiny networks working concurrently in their brain - and then there are higher-level networks supervising and aggregating the smaller nets, and so on.

Current AI NN architectures are flat and have no high-level structure. There's no hierarchy. There's no plurisemantic context spanning large time intervals and logic trees. No workable memory organized short-, mid- and long-term. Etc, etc, etc.

We're not even scratching the surface yet.




Further, the brain uses subsystems whose architectures are good at the specific problem they're solving, with smooth integration that somehow often preserves context. The approach of current ANNs seems to be the equivalent of taking the visual cortex, or some other single subsystem, and trying to apply it to every other subsystem's area of expertise. It will do interesting things but always fall short, since it's architecturally not the right tool for the job.

It's why my money is on the humans for the upcoming Starcraft challenge. Like poker, bluffing is integral to the process, and AIs have had a really hard time with that in poker outside of constrained games. Starcraft provides enough open-ended opportunities that the AI will get smashed by a clever human. Hip hop similarly combines tons of pop-culture references, the psychology of what goes with what, coordination of presentation, and established musical tricks. ANNs are bad at at least half of those by design.


There are two ways the AI could win at Starcraft:

1. The game does not really depend on a larger context. What you see is what you get. The "muscle memory" of a relatively simple ANI could therefore be enough. This is partially in contradiction with what you said above about bluffing, but I feel the contradiction is less than 50%.

2. Simple "muscle memory" strategies should not be enough to win the game, but the ANI's lightning speed reactions and its ability to see the whole game at once are enough to outperform more sophisticated-thinking humans who are slower and have tunnel vision w.r.t. the game. Basically the brute-force approach.

I'm not placing bets, and I'm as curious as everyone else as to the result of the contest. I'm just saying - if the AI does win, these are the ways it could do that.

I'm using the expression "muscle memory", which is inadequate, because I have no better way to express how current NNs operate. They are dumb at the higher semantic levels. They only become powerful through colossal repetition and reinforcement.

Watching current NNs being trained never fails to give me flashbacks to my college days when I was practicing karate. We would go to the dojo, pick a technique, and then repeat it an enormous number of times, to let the sequence sink into muscle memory. I'm sure I still have some (natural) NNs in my brain that have got that thing down pat - I don't have to think about doing the techniques, they just "execute" on their own. But there's no semantic level here, it's just dumb (but blazing fast) automation.
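That repetition-and-reinforcement loop is easy to see in miniature. Here is a toy sketch (all names and numbers are made up for illustration, not from anything above): a single artificial neuron "drills" one fixed stimulus-response pair thousands of times, nudging its weights a little after every attempt, until the response becomes automatic. There is no semantics anywhere, just iterated error correction.

```python
# Toy "muscle memory" training: one neuron learns a fixed
# stimulus -> response mapping purely through massive repetition.
# Every name and constant here is illustrative.

def train(reps=10_000, lr=0.1):
    w, b = 0.0, 0.0        # start knowing nothing
    x, target = 1.0, 0.8   # the "technique" to drill: input -> desired output
    for _ in range(reps):  # repeat the same drill, over and over
        y = w * x + b      # attempt the technique
        err = y - target   # how far off the execution was
        w -= lr * err * x  # nudge the parameters slightly toward the target
        b -= lr * err
    return w * x + b       # the final, now-automatic "execution"

print(round(train(), 3))   # after enough reps the output sits at the target: 0.8
```

The same dynamic - thousands of near-identical updates producing a fast, unthinking reflex - is what scales up (with many more neurons and drills) into the trained networks discussed above.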


It's possible but it's a lot more context-driven than you think with a human involved. The bots do great against each other with their strategies and muscle-memory stuff. Throw a human in the mix, they start noticing patterns in how the units are managed, what strategies are likely, etc. They've exploited these in the competitions to hilarious effect. Here's the main site on prior work & results at the competitions:

https://www.cs.mun.ca/~dchurchill/starcraftaicomp/reports.sh...

Here are a few examples of how the muscle-memory approach, especially if focused on unit-vs-unit play, can fail against humans.

" In this example, Bakuryu (human) notices that Skynet's units will chase his zerglings if they are near, and proceeds to run around Skynet's base to distract Skynet long enough so that Bakuryu can make flying units to come attack Skynet's base. This type of behaviour is incredibly hard to detect in a bot, since it requires knowledge of the larger context of the game which may only have consequences 5 or more minutes in the future. " (2013)

Note: At this point, they also sucked at building and expansion strategies which surprised me since I thought a basic planner would be able to do that. The constraints between rate of expansion, where to place stuff, what units to keep/build, and so on get really hard. The other thing they weren't good at was switching strategies mid-game based on what opponents are doing.

" despite Djem5 (pro) making the bots look silly this year... they were able to defeat D-ranked, and even some C-ranked players. After the human players have played one or two games against the bots they are then easily able to detect and exploit small mistakes that the bots make in order to easily win the majority of games... " (2015)

I don't have the 2016 results yet. It's clear they're getting better, but there's a huge leap between bots and humans. The gap seems to be context, reading opponents, and expansion. Now, if they can fix those, they have a chance of taking down pros, combined with the machines' inherent strength in micromanagement and muscle memory on specific attack/defense patterns.

Below are examples of the AIs managing units in perfect formation and reaction. It's like an emergent ballet or something. The second one could be terrifying if they can get it into real-world military use. Figured you might enjoy these given your position. :)

https://www.youtube.com/watch?v=DXUOWXidcY0

https://www.youtube.com/watch?v=IKVFZ28ybQs


I'm pretty sure a NN AI could beat most players in Starcraft. Starcraft is actually pretty straightforward and the meta hasn't changed much for a while, which means the NN will have tons of training data. By seeing the revealed map and learning to send in a scout a few times, the AI could be frightening.

Harassment is also straightforward, and high-level bluffs are relatively hard to pull off in Starcraft (you need to aim at the mineral line, for example), so out-of-the-ordinary experiences are rare.


The training data would give them a considerable advantage against non-experts. However, the human pros have managed to bluff them or exploit their patterns in every competition to date. You might be underestimating the risk of bluffs, or just odd behavior, in the future competition. I hope they figure out how to nail it down, as it's critical for AI in general. They just haven't yet.

The other angle is that humans got this good with way less training data and personal exploration. Success that depends on training on all available data would mean AIs can solve problems like this only with massive, accurate hindsight. Problems in the real world often require foresight too, either regularly or in high-impact, rare scenarios. We'd still be on top in terms of results vs. training time even if humanity takes a loss in the competition. :)


>The other angle is that humans got this good with way less training data and personal exploration.

I think you are underestimating how much training goes into a human. I would not mind seeing how a new born baby does against one of these AIs.


I already said there was a huge set of unstructured and structured data fed into the brain over one to two decades before it becomes useful. The difference is that the brain's architecture doesn't require an insane number of examples of the exact thing you want it to do. It extrapolates from existing data with a small training set. Further, it shows some common sense and adaptation in how it handles weird stuff.

Try doing that with existing schemes. The data set required within their constraints would dwarf what a brain takes in, with worse results.


The comparison is contaminated by the test being preselected for a skill that humans are good at (i.e. the game has been designed to be within the human skill range of someone from a modern society).

I am sure you could design a game around the strengths of modern AIs that no human could ever win. What would that tell us?


I have no idea. Humans are optimized to win at the real world against each other and all other species. That's a non-ideal environment, too. Designing an AI to win in a world or environment optimized for them might be interesting in some way. Just doubt it would matter for practical applications in a messy world inhabited by humans.



