Having read the article (though admittedly with moderate attention), I do not see how it could be categorized as «rambling nonsense» - it seems to be a very interesting page of history. You will have to be more specific.
Then,

It is perplexing, and suggests a careless habit, to read 'AI' where _'statistical language models'_ seems to be the right term.
"Artificial Intelligence" is the "problem solver that can replace a professional"; there is no need to encourage disparaging tints on the term as a whole, and there are good reasons to differentiate "AI proper", built to obtain reliable outputs, from the attempts around it.
What's really perplexing is how you passed off an entirely invented meaning for the term AI so authoritatively...
You're definitely in a minority if you really think LMs don't qualify as
> the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages
I am afraid the term 'Language Model' does not fit the category; 'Statistical Language Model' is required to define (after interpretation) what they do. Not all Language Models need to be based on the relational frequencies of term occurrences.

(Hopefully, some will one day even be based on mechanisms of Intelligence - the opposite approach.)
> «the theory and development [...]»
If you wanted to quote John McCarthy (project proposal for the Dartmouth summer seminar, 1955), the actual words were «An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves».
I reckon that what you quoted, taken to the letter, is a "merriamwebsterism" - inherently of limited authority, because dictionaries typically record usage, not meaning.
> qualify
The output of text must be an intelligent composition to fit the context of a «task» that «require[s] human intelligence», as per the definition you proposed - because you do not need «human intelligence» to rave (as in, outputting text not filtered by intelligence).
> minority
If that were ever an indicator, it would rather suggest a promising stance: in Paretian distributions (which is what you actually find outside of selected groups), the output of the majority is poor.
An interview with Yann LeCun came out yesterday: he defined (S)LMs along those lines. Maybe discuss the matter with him. Is YLC an authoritative enough source for you?
This is HN - you MUST question any article painting Google in a good light, SHOULD question articles that portray any other company well, but MAY simply ignore them.
Any other text such as definitions for the term AI is valid as long as it complies with RFC101HN.