Hacker News

I think this bubble's a weird one in that it's a very different size depending on your point of view. Everything is getting rebranded as AI. Taking averages, grouped by something? That's AI now. Using algorithms to do different things for different people? That's AI now. At least it will be in your press coverage.

One thing is AI to the press and public, another thing is AI to investors, yet another thing to nontechnical workers, and it's not even a single cohesive thing to the people building it all. Wherever you personally draw your lines between AI and not-AI, the boundaries do keep expanding. Does that mean the bubble is growing? There are undoubtedly more people doing machine learning, more people doing statistics, more people solving optimization problems, and more of each other thing that we call AI, but the "AI" label is growing faster than all of that. It's a weird bubble. If it pops, does that mean there will be fewer jobs for people like me, or does it just mean people will stop calling it AI? Or is this just a word's meaning changing, and not a large bubble?

This story comes to mind: http://web.archive.org/web/20190626012618/https://gen.medium...




Well said: AI in practice is just stats rebranded.

Neural networks are shiny and new, but they are just an implementation of solutions from stats that have been around for decades.

Regression? MSE loss. Now with a neural network trained on MSE loss.
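A sketch of that point (toy data and plain numpy, nothing from the thread itself): gradient descent on MSE with a single linear "neuron" lands on the same answer as classical least squares.

```python
import numpy as np

# Toy data: y = 2x + 1 plus noise (numbers invented for illustration)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 200)

# Classical least squares (closed form)
A = np.column_stack([x, np.ones_like(x)])
w_ols, b_ols = np.linalg.lstsq(A, y, rcond=None)[0]

# The same problem as a one-neuron "network": gradient descent on MSE
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    err = (w * x + b) - y
    w -= lr * 2 * np.mean(err * x)   # d(MSE)/dw
    b -= lr * 2 * np.mean(err)       # d(MSE)/db

# Both routes converge to the same coefficients
```

Same loss, same minimizer; the neural-network framing only changes who does the optimizing.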

Classification? Logistic regression with cross entropy loss.
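Same idea for classification: a single sigmoid unit trained on cross-entropy loss is literally logistic regression. A minimal sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two 1-D classes with different means (invented toy data)
x = np.concatenate([rng.normal(-1, 1, 100), rng.normal(1, 1, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Logistic regression = one sigmoid neuron trained on cross-entropy
w, b, lr = 0.0, 0.0, 0.5
for _ in range(1000):
    p = sigmoid(w * x + b)
    # Gradient of mean cross-entropy w.r.t. (w, b) is (p - y) times the input
    w -= lr * np.mean((p - y) * x)
    b -= lr * np.mean(p - y)

acc = np.mean((sigmoid(w * x + b) > 0.5) == y)
```

The learned weight is positive (class 1 sits to the right), and accuracy approaches the overlap-limited optimum for these two Gaussians.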

Anomaly detection? Feature extraction? Plenty of people still use PCA, which is nothing new. Autoencoders may get you more mileage, but conceptually work very similarly to PCA for these use cases.
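For instance, PCA reconstruction error already gives you an anomaly score, the same way an autoencoder bottleneck does. A toy sketch (data invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# 2-D data lying mostly along one direction
z = rng.normal(0, 1, 300)
X = np.column_stack([z, 0.5 * z + rng.normal(0, 0.05, 300)])

# PCA: project onto the top principal component and reconstruct
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_rec = Xc @ Vt[0:1].T @ Vt[0:1]

# Reconstruction error works as an anomaly score: an off-manifold
# point reconstructs poorly, exactly as with an autoencoder bottleneck
normal_err = np.linalg.norm(Xc - X_rec, axis=1)
outlier = np.array([0.0, 3.0]) - X.mean(axis=0)
outlier_err = np.linalg.norm(outlier - outlier @ Vt[0:1].T @ Vt[0:1])
```

The outlier's reconstruction error dwarfs every in-distribution point's, which is the whole trick in both the PCA and the autoencoder version.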

Image data? Use methods from signal processing, also decades old. Convolutions are nothing new, you're just now implementing them with neural networks, and adding a loss function based on what you're trying to predict.
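For example, a hand-designed edge-detector kernel from classic signal processing; a convolutional layer just learns kernels like this from a loss function instead. A 1-D sketch with an invented step signal:

```python
import numpy as np

# A step signal: classic signal-processing territory
signal = np.concatenate([np.zeros(50), np.ones(50)])

# A hand-designed, decades-old edge-detector kernel
kernel = np.array([-1.0, 0.0, 1.0])
edges = np.convolve(signal, kernel, mode="valid")

# The response is nonzero only where the signal jumps; a trained
# conv layer would simply *learn* such a kernel from data
step_location = int(np.argmax(np.abs(edges)))
```

The convolution itself is identical; only where the kernel comes from (hand design vs. gradient descent) has changed.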

Time series data? You may be better off just sticking to ARIMA. It depends on your use case, but RNNs may not even work here.
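The "AR" part of ARIMA is just a linear regression of the series on its own lags, solvable in closed form. A dependency-free sketch with a simulated series (real ARIMA tooling lives in libraries like statsmodels):

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulate an AR(1) process: x_t = 0.8 * x_{t-1} + noise
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + rng.normal(0, 1)

# The "AR" in ARIMA is linear regression of x_t on its lag,
# solvable in closed form -- no neural network required
phi = np.linalg.lstsq(x[:-1, None], x[1:], rcond=None)[0][0]
# phi recovers a value close to the true coefficient 0.8
```

One least-squares solve recovers the dynamics; an RNN buys you nothing on a series this simple.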

Reinforcement learning is more exciting, and is solving new problems that weren't even being approached before. Same goes for GANs, and unsupervised learning in general stays exciting and fresh.

But most of the applications of AI are ho-hum: decades-old methods, now implemented with neural networks, at least sometimes. What has really changed is the amount of data available and the ability to process it, not necessarily the approaches to analyzing it.


Stats is getting rebranded as AI, but that's not the extent of it. You approximately solve a traveling salesman problem and it's AI too now. The label is growing to encompass all kinds of algorithmic decision making. You don't even need data, which is the sine qua non of stats.
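Case in point: a nearest-neighbor TSP heuristic involves no data set and no statistics at all, yet this kind of algorithmic decision making gets the AI label too. A sketch with invented city coordinates:

```python
import math

# Cities as coordinates (made up for illustration); greedy
# nearest-neighbor heuristic for TSP: pure algorithmic decision
# making, no data set, no statistics
cities = [(0, 0), (1, 5), (2, 2), (5, 1), (6, 6), (3, 4)]

def nearest_neighbor_tour(cities):
    unvisited = set(range(1, len(cities)))
    tour = [0]  # start at city 0
    while unvisited:
        last = cities[tour[-1]]
        # Greedily hop to the closest unvisited city
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbor_tour(cities)
```

It's an approximation heuristic from classical operations research, not learning in any sense.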


Neural networks add computational depth. So I would disagree with the statement that AI is just "stats rebranded". That's about as useful an analogy as saying that statistics in practice is just applied linear algebra.


Define computational depth. Nonlinearity? Parallelizability? "Computational depth" sounds like hyperbole.

You're still approaching stats problems with the same methodologies. You're just using NNs as your optimizer.


If you are interested, I would suggest reading up on the https://en.wikipedia.org/wiki/Universal_approximation_theore...


There are theorems like that for polynomials and Fourier series and all sorts of other function classes too. They are just as practically relevant (or irrelevant).
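For instance, plain polynomial least squares is already a "universal approximator" on an interval (Weierstrass), which by itself says little about practice. A sketch approximating sin:

```python
import numpy as np

# Weierstrass-style approximation: polynomials can approximate any
# continuous function on an interval, analogously to what the
# universal approximation theorem says for neural networks
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)

# Degree-9 polynomial least-squares fit (domain-scaled for stability)
p = np.polynomial.Polynomial.fit(x, y, deg=9)
err = np.max(np.abs(p(x) - y))
# err is tiny: the polynomial nails sin on [-pi, pi]
```

Approximation power in the limit is cheap; it doesn't tell you which function class trains well or generalizes.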


Sure... I mean it is matrices all the way down but the claim that AI (e.g. deep learning) is just applied statistics is disingenuous.


"AI" is inherently meaningless because it is a moving goalpost. At any point in time, AI essentially means "something normally done by humans that most people don't think computers could do proficiently". When, inevitably, someone programs a computer to do that thing, that thing quickly becomes no longer "AI" and the cycle continues.


I disagree on how the goalposts are moving. I'll outline the three camps I see.

Some people say AI and mean "a thinking sorta-conscious thing that thinks like we do." Like C3PO or R2D2 or HAL or ourselves. That's their definition. People complain that that's not a rigorous falsifiable concept, so they say, "Okay here's a test that I think can only be done by a thinking sorta-conscious thing that thinks like we do." Then someone clever figures out how to do it without something that fits their definition. They respond, "Okay fine I guess my test was bad." It's kinda moving the goalposts, but their fuzzy conception of what AI means is still unchanged. It just remains hard to make concrete, especially since we can't really define things like consciousness in the first place.

Another camp is marketing. It takes anything it can sell by calling it AI and does so. The goalposts lower. Linear regression is branded as AI now, like it or not. This is the opposite direction of what you're talking about.

A final camp is what you talked about. I think these people are really in the first camp I mentioned: they don't move the goalposts on their concept, they just move the goalposts on their test. But there is also a distinct group who consider AI to be "something normally done by humans that most people don't think computers could do proficiently," or even just "computerized decision making/information processing."

I'm in the first camp. "AI" is no more meaningless than "consciousness," but it's equally hard to define. Some people have begun using the label "AGI" for it. Same concept, whatever your word choice. I think of C3PO, but I understand that words are defined by usage, and maybe linear regression is AI now.


I've been trying to explain to people why I think ML is Stats rebranded but this is the most succinct expression of that sentiment:

> Taking averages, grouped by something? That's AI now.

I think that is right. The algorithm that does the grouped averages is machine learning, and if you put error bars around it, it is stats.
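That one-liner is implementable with nothing but the standard library; a sketch with invented numbers, where the error bars (standard error of the mean) are the "stats" part:

```python
import statistics
from collections import defaultdict

# "Taking averages, grouped by something" (data made up for illustration)
data = [("a", 2.0), ("a", 4.0), ("b", 1.0), ("b", 3.0), ("b", 5.0)]

groups = defaultdict(list)
for key, value in data:
    groups[key].append(value)

# The "machine learning": a per-group average
means = {k: statistics.fmean(v) for k, v in groups.items()}

# The "stats": error bars via the standard error of the mean
sems = {k: statistics.stdev(v) / len(v) ** 0.5 for k, v in groups.items()}
```

Ten lines of grouping and averaging; whether it counts as ML or stats is purely a matter of labeling.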

To address your concern: I wouldn't worry about the relevance of applying math and logic to the world. It has always been growing.


There's certainly a lot of BS going on, especially if it helps sales or fundraising or whatnot... but there's also a lot actually happening in terms of ai/computerized-statistics, that regardless of what you call it, is permeating into life.



