I've heard this sentiment before (that AI != ML, always delivered with weird hostility), but I've also heard key figures in the AI world say strongly that ML of any kind is a subset of the broader AI umbrella. Are you sure it is strictly wrong to refer to this example as AI? Also, why are you using such strong words? I'm genuinely curious why there is so much emotion when people, maybe, misuse these terms.
He's going for a middlebrow dismissal. This is like the "bitcoin isn't money!" or "twitter is not a serious medium for communication!" old-schoolism that some HN traditionalists defend in spite of real-world application. In the end it doesn't uncover a salient point; it gets stuck on semantics to shut down an entire idea.
You're not 100% wrong, but I'm not trying to shut down the entire idea of AI.
Let me put it this way: it's called AI, Artificial Intelligence. Are trees intelligent?
Because they are a clump of cells that manages to achieve amazing results: growing into some of the largest living things on the planet, extracting nutrients and transporting them 20 or more meters above ground, where they are used to harvest energy from the sun, all the while producing offspring every year and fighting off predators.
I don't think many people would credit them with "natural intelligence".
I admit that I maybe apply a narrow definition of "intelligence", but I think the core concept is one of "understanding".
And we are not even close to an ML algo actually understanding anything.
And this is the problem with calling it AI: it masks the inherent shortcomings of ML.
People are given the impression that applications using ML actually do what they are expected to do, because, like a person, you train them and then they understand their job and do it.
This is NOT HOW THAT WORKS. The ML algo does not understand that it is being asked to identify oncoming traffic. It does not know that it is looking at cancer cells that will soon kill someone.
And even worse, we humans who make these things are 100% unable to understand the models we create. We can feed them data and compare the results, but that is it. There is no real way to understand how they work in detail.
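To make that concrete, here is a toy sketch (my own illustration using scikit-learn, not anything from the article or anyone's real pipeline): even for a tiny model, all we get back is a score and some weight matrices, nothing that reads like understanding.

```python
# Toy illustration: train a small classifier, then see what we can actually inspect.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for any real task (traffic, tumours, ...).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(X, y)

# We can measure outputs against labels...
print("training accuracy:", model.score(X, y))

# ...but the "explanation" of the model's behaviour is just these weight matrices.
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weights, shape {w.shape}")
# Nothing in here says "this is oncoming traffic" or "this is a tumour";
# we can only probe inputs and outputs from the outside.
```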
These models get used anyway, with predictable results; see Tesla Autopilot for a prominent example.
Also, I call them AI researchers for the same reason I call nuclear fusion researchers that: not because they are doing it, but because they are researching it.
I'm not the OP, but have been a data person long enough to hazard a guess.
If you work as an engineer or scientist, the term "AI" is basically a synonym for "unrealistic executive expectations." That's a super triggering and stressful situation to be in, especially early in your career.
I've actually had a CEO describe the expected output of my ML team as "magic AI shit" - I bet you can imagine the team's reaction and tone. I'm reading the same strong emotions and frustration here.
The good news is you can always course-correct expectations with communication. I've come to love talking about AI with people who are only somewhat technical, because their wildest dreams are sometimes totally doable with some duct tape and fine-tuning.