The superhuman fallacy is the nemesis of all AI research: if it isn't better than the best human who has ever attempted a problem, it is worthless and derisory.
I've done a lot of work on artificial creativity, and am constantly thrilled when code generates something at the level of a creative 3rd grader. But show it to most people and you get "It's hardly a Michelangelo, is it?"
Sounds less like a fallacy and more like a strawman argument by certain types of AI proponents. First they claim that efficiency is the only thing that matters in intelligence. Then they fail to demonstrate high enough efficiency in real-life applications and resort to rhetoric to explain it all away.
> they claim that efficiency is the only thing that matters in intelligence
Who, and where? I've not come across such a claim.
There is, of course, a question of whether a particular AI tool demonstrates value for money in a particular commercial niche. But failing that test doesn't mean it isn't 'real' AI or 'real' intelligence. There are plenty of situations in which it wouldn't be value for money for me to invest in you; I don't take that to mean you aren't intelligent. It's a quite different question.
No, it's rather the idea that you cannot say "AI beats Go master" or "AI masters Go" if it didn't actually attain some sort of high ranking by itself. Beating a low-ranked player, while still interesting, is not necessarily proof of proficiency; it could be due to blind luck, for example.
There's no clear definition of "mastery." Honinbō Shusaku was a master. Honinbō Shūsai, Go Seigen, and Minoru Kitani would also be considered masters. As time marches on, average player skill increases. While Honinbō Shusaku was among the strongest players of his day, he would be hard pressed to hold his own against professionals in today's era.
I think it's fair to say that anyone who reaches shodan (1 dan professional) plays at the master level. The difference between 1 dan professional (1P) and 9 dan professional (9P) is three stones, or roughly 30 points. In amateur play, for comparison, the difference between a 1 dan and a 3 dan is about two stones (roughly 20 points).
AlphaGo won 5 straight games against a 2 dan professional player. That puts AlphaGo around 3P, well into the master range.
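A back-of-the-envelope way to see why a 5-0 sweep supports that kind of estimate (a minimal sketch, assuming an Elo-style logistic win model and a purely hypothetical ~100 rating points per professional dan; neither figure comes from this thread):

```python
# Sketch: how likely is a 5-0 sweep for different rank gaps?
# Assumptions (mine, not from the thread): an Elo-style logistic win model,
# and a hypothetical ~100 rating points per professional dan.

def win_prob(elo_gap: float) -> float:
    """Single-game win probability for the stronger player under an Elo model."""
    return 1.0 / (1.0 + 10.0 ** (-elo_gap / 400.0))

def sweep_prob(elo_gap: float, games: int = 5) -> float:
    """Probability the stronger player wins every game of a short match."""
    return win_prob(elo_gap) ** games

ELO_PER_PRO_DAN = 100  # hypothetical conversion, for illustration only

for dan_gap in range(4):
    p = sweep_prob(dan_gap * ELO_PER_PRO_DAN)
    print(f"{dan_gap} pro-dan gap: P(5-0 sweep) ~ {p:.2f}")
```

Under those assumptions, even two equally strong players produce a 5-0 sweep only about 3% of the time, so a clean sweep against a 2P player is reasonably strong evidence that AlphaGo sits somewhere in the professional range, even if it can't pin the rank down exactly.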
In Go, the ranks are:
30 kyu amateur (never played)
1 kyu amateur (understands the game)
1 dan amateur (mastered the basics)
7 dan amateur (nearly professional strength)
shodan (1 dan professional)
9 dan (top ranking professional)
"Traditionally it has been uncommon for a low professional dan to beat some of the highest pro dans. But since the late-90s it has slowly become more common."
There are about 1500 chess players with the title "Grandmaster", and a whole lot more with various other "Master" titles.
That is to say, a player ranked within the top 1000 at a game like Go is a master of the game. A human player will take 10 years and well north of 10,000 hours to get to where AlphaGo is.
It did attain high proficiency. It both beat a Go master and played at a master level (where 'master' = professional dan rating). It wasn't assessed for an official ranking, but it beat a 2P player 5-0. That's not a 'low-ranked player' - I suspect you don't know much about Go.
We'll see in March how it does against a 9p.
Your comment was a perfect example of the superhuman fallacy - where a 3P+ AI becomes 'low-ranked'.
Indeed, if this software beat me soundly (and it would since I don't play Go), could I still claim that the AI hasn't mastered the game, because a handful of humans are better than it?
I guess AI has to purge humans from the Earth to justify the statement "AI masters xxxx"...