That's just another version of the trap the GP spoke about. About a decade ago everybody was expecting emergent complex behavior from all kinds of evolutionary, intelligent ("swarm") systems. It didn't happen; we've seen this before.
About a decade ago, winning at Go or self-driving cars were seen as pipe dreams many decades away. Yet here we are.
The author is making the mistake of thinking that just because he can point to some areas where we aren't as far along as we thought, he has made an argument against AI.
That's not how it works. We don't get to decide what the right metrics are. All we can see is that we keep making progress, sometimes in large leaps, sometimes slowly.
I always find it fascinating that we have no problem accepting that human consciousness evolved from nothing but the most elementary building blocks of the universe, and that once we became complex enough we ended up conscious. Yet somehow the idea of technology going through the same process, just in a different medium, seems impossible to many.
I know where my bet is, at least, and I haven't seen anything to counter it, the OP's essay included.
The fallacy there is glorifying consciousness. Full consciousness, in the sense of omniscience, is an unachievable ideal. If we ascribe consciousness to ourselves, then depending on one's theory of conscious thought, that ascription is likely already faulty in some respect.
I don't see anyone glorifying consciousness, and certainly not as some omniscient ideal. In fact, I only see people arguing that consciousness isn't really the goal or the focus here, but rather that you can't talk with any certainty about whether or not it's possible. You can, however, point to the fact that we are making progress toward more and more complex relationships, and that this looks very much like how we became conscious. That's all, really.
"Never" is a strong prediction. But yes, ANNs have nothing in common with BNNs (biological ... :-)) at all, other than taking them as a very rough abstraction for teaching the basic intuition of the chained up tensor transformations.
The hard thing is to predict the when, or even the if, of AI. If it happens, it will be a sudden, light-switch-like moment; I don't think AI can happen gradually. At the least, the first artificially sentient entity will arrive in a moment much like the singularity some love to predict in the near future...
But as to when that moment will occur, or even if it will, I think we have no real data showing we are any closer today than, say, 10 or 30 years ago. Pattern matching, no matter how complex, isn't "all there is" to intelligence and consciousness.
EDIT: OP changed his reply from "will never happen" to "hasn't happened yet" while I was replying, which explains why mine might read a bit strange now... :-)
Human intelligence at the individual level evolved pretty gradually, but there hasn't been enough time for biology to explain our advancement over the last 10,000 years, let alone the last 500. Culture and social organization are the essential nurturing factors there.
Every human genius would be out foraging for roots, perhaps reinventing the wheel or the lever, if they grew up without the benefit and influence of a society that makes greater achievement possible. The modern science and high technology we attribute to human intelligence are really the products of a superintelligence (not to be conflated with consciousness) acting through us as appendages.
I think it's entirely possible (even likely) that all of the components of a new computational superintelligence already exist, but they are still "hunting and gathering" in the halls of academia or the stock market or biotech or defense...
https://en.m.wikipedia.org/wiki/Swarm_intelligence
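For anyone unfamiliar with the term, a classic example behind that link is particle swarm optimization: simple agents share their best-known positions and collectively converge on a solution. A minimal sketch (the toy objective, coefficients, and iteration count are arbitrary illustrative choices, roughly the textbook defaults):

    # Particle swarm optimization on f(p) = |p|^2, minimum at the origin.
    import numpy as np

    rng = np.random.default_rng(1)
    f = lambda p: np.sum(p**2, axis=1)      # toy objective

    n, dim = 30, 2
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_val = pos.copy(), f(pos)   # each particle's best so far
    gbest = pbest[np.argmin(pbest_val)]     # the swarm's best so far

    for _ in range(100):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # inertia + pull toward personal best + pull toward swarm best
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        val = f(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print(gbest)  # close to [0, 0]

The swarm does optimize, which is exactly the point of the thread: useful collective behavior emerged, but nothing resembling open-ended intelligence did.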