Learning to Love the AI Bubble (sloanreview.mit.edu)
147 points by KhoomeiK on June 28, 2019 | 56 comments



The interesting thing (to me) about the dotcom bubble, pointed out in one of pg's essays, is how much it got right.

It was right about the economic potential of the www. Twenty years later, it is as big a deal as the 1999 pundits predicted. It was right that the big winners would be very big, very fast: Google, FB, Alibaba, Amazon... bigger than any tech company in '99. It was right about winning early and establishing a dominant position... letting network effects and the scaling power of software and the web go to work.

Unfortunately, with binary outcomes... getting 4/5 things right is still a wipeout.

The bubble was slightly off about timing. More big winners were founded/determined in the five years after the bubble than during it. It also slightly overestimated early-mover advantage... closely related to the timing mistake.

If that bubble is the model for this one... interesting times ahead.


And the dotcom timing was off because the internet hadn't yet permeated everyday life. Once it had, the network effects paid off and consolidation merged startups into giants.

Will the same happen to AI? Uses of AI and "AI" seem to take very well to today's world. Everybody wants a piece of the action, be it in ad networks, big data, surveillance, cat-ear filters, or fake nude pics. People are much more technologically literate, and the concept, in any form, will not fall on barren ground. It's sometimes even frustrating how much intelligence people expect from basic applications (e.g., implement a search function that isn't tolerant of typos and you'll get angry comments).

The internet paved the way by leaving us with lots of redundant data we don't know what to do with. I think the world is all too ready to welcome advances in AI, and is in fact ignoring what they will do to financial systems and personal lives.


At least in terms of zeitgeist, it feels like a turning point of some sort that the main conversations around "what does AI even mean?" started to go in a more concrete "can we get it to do this?" direction. It's increasingly becoming a part of life, even mundane.^ A camera is now simply expected to understand who or what it's taking pictures of.

^Great example, btw. Autocomplete and search capture it.


I think this bubble's a weird one in that it's a very different size depending on your point of view. Everything is getting rebranded as AI. Taking averages, grouped by something? That's AI now. Using algorithms to do different things for different people? That's AI now. At least it will be in your press coverage.

One thing is AI to the press and public, another thing is AI to investors, yet another thing to nontechnical workers, and not even a single cohesive thing to the people building it all. Wherever you personally draw your lines between AI and not-AI, the boundaries do keep expanding. Does that mean the bubble is growing? There are undoubtedly more people doing machine learning, more people doing statistics, more people solving optimization problems, and more of each other thing that we call AI, but the "AI" label is growing faster than all that. It's a weird bubble. If it pops, does that mean there will be fewer jobs for people like me, or does it just mean people will stop calling it AI? Or is this just a word's meaning changing and not a large bubble?

This story comes to mind: http://web.archive.org/web/20190626012618/https://gen.medium...


Well said; AI in practice is just stats rebranded.

Neural networks are shiny and new, but they are just an implementation of solutions from stats that have been around for decades.

Regression? MSE loss. Now with a neural network trained on MSE loss.

Classification? Logistic regression with cross entropy loss.
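
A minimal sketch of that equivalence (assuming numpy and torch are available): ordinary least squares and a one-layer network trained on MSE loss land on roughly the same coefficients. Swap the loss for cross-entropy and add a sigmoid and you have logistic regression.

  import numpy as np
  import torch

  rng = np.random.default_rng(0)
  X = rng.normal(size=(200, 3))
  y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

  # Classical stats: closed-form least squares.
  beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

  # "AI": the same model as a one-layer network with MSE loss.
  model = torch.nn.Linear(3, 1, bias=False)
  opt = torch.optim.SGD(model.parameters(), lr=0.05)
  Xt = torch.tensor(X, dtype=torch.float32)
  yt = torch.tensor(y, dtype=torch.float32)
  for _ in range(2000):
      opt.zero_grad()
      torch.nn.functional.mse_loss(model(Xt).squeeze(), yt).backward()
      opt.step()

  print(beta_ols)                        # roughly [ 1.5 -2.0  0.5 ]
  print(model.weight.detach().numpy())   # nearly identical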

Anomaly detection? Feature extraction? Plenty of people still use PCA, which is nothing new. Autoencoders may get you more mileage, but conceptually work very similarly to PCA for these use cases.
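
To make the PCA comparison concrete, here's a sketch (assuming numpy, torch, and scikit-learn): a purely linear autoencoder trained on reconstruction MSE recovers the same principal subspace PCA finds, just up to a rotation of the components.

  import numpy as np
  import torch
  from sklearn.decomposition import PCA

  rng = np.random.default_rng(0)
  # Low-rank data plus noise, so there is a real 2-D structure to find.
  X = (rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
       + 0.1 * rng.normal(size=(500, 10))).astype(np.float32)
  X -= X.mean(axis=0)

  Z_pca = PCA(n_components=2).fit_transform(X)   # the decades-old method

  # Linear autoencoder: no nonlinearity, plain MSE reconstruction loss.
  enc = torch.nn.Linear(10, 2, bias=False)
  dec = torch.nn.Linear(2, 10, bias=False)
  opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
  Xt = torch.from_numpy(X)
  for _ in range(2000):
      opt.zero_grad()
      torch.nn.functional.mse_loss(dec(enc(Xt)), Xt).backward()
      opt.step()
  Z_ae = enc(Xt).detach().numpy()   # spans the same subspace as Z_pca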

Image data? Use methods from signal processing, also decades old. Convolutions are nothing new, you're just now implementing them with neural networks, and adding a loss function based on what you're trying to predict.
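
For instance, a sketch assuming scipy is available: a hand-designed Sobel kernel and a conv layer perform the exact same operation; the only difference is whether the kernel weights come from a textbook or from backprop.

  import numpy as np
  from scipy.signal import convolve2d

  img = np.random.rand(8, 8)            # stand-in for a grayscale image
  sobel_x = np.array([[-1, 0, 1],
                      [-2, 0, 2],
                      [-1, 0, 1]])      # classic edge-detection kernel
  edges = convolve2d(img, sobel_x, mode="same")
  # A CNN layer computes exactly this; it just learns the kernel values
  # from a loss function instead of taking them from signal processing.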

Time series data? You could be better off just sticking with ARIMA. It depends on your use case, but RNNs may not even work here.
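
A sketch of the old-school route, assuming statsmodels is installed (the ARIMA class and its order=(p, d, q) argument come from that library):

  import numpy as np
  from statsmodels.tsa.arima.model import ARIMA

  rng = np.random.default_rng(0)
  series = np.cumsum(rng.normal(size=200))   # a simple random-walk series

  fit = ARIMA(series, order=(1, 1, 1)).fit()
  print(fit.forecast(steps=5))               # the next five predicted values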

Reinforcement learning is more exciting, and is solving new problems that weren't even being approached before. Same goes for GANs, and unsupervised learning in general stays exciting and fresh.

But most applications of AI are ho-hum: decades-old methods, now implemented with neural networks (at least, sometimes). What has really changed is the amount of data available and the ability to process it, not necessarily the approaches to analyzing it.


Stats is getting rebranded as AI, but that's not the extent of it. You approximately solve a traveling salesman problem and it's AI too now. The label is growing to encompass all kinds of algorithmic decision making. You don't even need data, which is the sine qua non of stats.
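
A sketch of what I mean: the nearest-neighbor heuristic below involves no data and no statistics at all, yet "AI-powered route optimization" is exactly how it would be pitched.

  import math, random

  def nearest_neighbor_tour(points):
      # Greedy TSP heuristic: always visit the closest unvisited city.
      tour, unvisited = [points[0]], points[1:]
      while unvisited:
          nxt = min(unvisited, key=lambda p: math.dist(tour[-1], p))
          unvisited.remove(nxt)
          tour.append(nxt)
      return tour

  cities = [(random.random(), random.random()) for _ in range(20)]
  print(nearest_neighbor_tour(cities))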


Neural networks add computational depth. So I would disagree with the statement that AI is just "stats rebranded". That's about as useful an analogy as saying that statistics in practice is just applied linear algebra.


Define computational depth. Non-linearity? Parallelizability? "Computational depth" sounds like hyperbole.

You're still approaching stats problems with the same methodologies. You're just using NNs as your optimizer.


If you are interested, I would suggest reading up on the https://en.wikipedia.org/wiki/Universal_approximation_theore...


There are theorems like that for polynomials and Fourier series and all sorts of other function classes too. They are just as practically relevant (or irrelevant).


Sure... I mean, it is matrices all the way down, but the claim that AI (e.g. deep learning) is just applied statistics is disingenuous.


"AI" is inherently meaningless because it is a moving goalpost. At any point in time, AI essentially means "something normally done by humans that most people don't think computers could do proficiently". When, inevitably, someone programs a computer to do that thing, that thing quickly becomes no longer "AI" and the cycle continues.


I disagree on how the goalposts are moving. I'll outline the three camps I see.

Some people say AI and mean "a thinking sorta-conscious thing that thinks like we do." Like C3PO or R2D2 or HAL or ourselves. That's their definition. People complain that that's not a rigorous falsifiable concept, so they say, "Okay here's a test that I think can only be done by a thinking sorta-conscious thing that thinks like we do." Then someone clever figures out how to do it without something that fits their definition. They respond, "Okay fine I guess my test was bad." It's kinda moving the goalposts, but their fuzzy conception of what AI means is still unchanged. It just remains hard to make concrete, especially since we can't really define things like consciousness in the first place.

Another camp is marketing. It takes anything it can sell by calling it AI and does so. The goalposts lower. Linear regression is branded as AI now, like it or not. This is the opposite direction of what you're talking about.

A final camp is what you talked about. I think these people are really in the first camp I mentioned: they don't move the goalposts on their concept, they just move the goalposts on their test. But there is also a distinct group who consider it "something normally done by humans that most people don't think computers could do proficiently," or even just "computerized decision making/information processing."

I'm in the first camp. "AI" is no more meaningless than "consciousness," but it's equally hard to define. Some people have begun using the label "AGI" for it. Same concept, whatever your word choice. I think of C3PO, but I understand that words are defined by usage, and maybe linear regression is AI now.


I've been trying to explain to people why I think ML is Stats rebranded but this is the most succinct expression of that sentiment:

> Taking averages, grouped by something? That's AI now.

I think that is right. The algorithm that does the grouped averages is machine learning, and if you put error bars around it, it is stats.
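
In pandas terms, a toy sketch (the column names are made up):

  import pandas as pd

  df = pd.DataFrame({"group": ["a", "a", "b", "b", "b"],
                     "value": [1.0, 2.0, 3.0, 5.0, 4.0]})

  means = df.groupby("group")["value"].mean()                  # "machine learning"
  summary = df.groupby("group")["value"].agg(["mean", "sem"])  # "stats"
  print(summary)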

To address your concern: I wouldn't worry about the relevance of applying math and logic to the world. It has always been growing.


There's certainly a lot of BS going on, especially where it helps sales or fundraising or whatnot... but there's also a lot actually happening in ai/computerized-statistics that, regardless of what you call it, is permeating into life.


Right now there are 5 comments on this post, and all 5 focus on the issue of how to invest in AI. That suggests something about how Hacker News has changed over the years. There is a larger focus on “what stock can I buy” and somewhat less focus on working with the actual tech.


This is a financial article about the effects of an AI bubble on the economy.

While I certainly wouldn't call a technical discussion off topic, I'm not surprised that the majority of focus would be on the main topic of the article -- economics and the stock market.


If I wanted to get started learning about machine learning/AI, where is the best place to do that? I'm a functional programmer who's learned mostly everything about software engineering on the job, and I feel like I don't have the background I need to get started on it; I struggle immensely with math, but have had no problem with my career in software thus far.

I am going to be traveling to a machine learning convention in two weeks as well, and I'd love a good place to find some background so I can maybe compete successfully there.


I found this playlist to be helpful as a beginner: https://www.youtube.com/playlist?list=PLblh5JKOoLUICTaGLRoHQ...

If I only had 2 weeks of evenings and weekends to conjure up some ML knowledge, I would start there. Then you could move on to the courses from fast.ai (https://www.fast.ai/)


Thank you. I'm going to peruse this and MIT's offerings as well. I also need to learn the basics of another country's language...


I'm biased, because I took the course when it was called "ML Class" in 2011:

https://www.coursera.org/learn/machine-learning

Everything is done in Octave (i.e., an open-source MATLAB-like language); the primitives are vectors and matrices, so you'll have to wrap your head around that.

But that course gave me the first explanation as to how neural networks actually worked that I could understand; I had been reading about neural networks for years from various sources - books, online, videos, etc - and nothing ever "clicked" for me (mainly around how backprop worked). For some reason, this did it for me.

Since then, I have taken other MOOCs centered around ML and Deep Learning, mainly with a focus on self-driving vehicles.

Oh - ML Class also led one individual to implement this during the course, as the ALVINN vehicle was mentioned more than a few times:

https://blog.davidsingleton.org/nnrccar/

While Singleton does mention its "vintage-ness", I still think it's a sound project for inspiration and for learning how to apply a neural network to a simple self-driving vehicle system, not to mention the fact that it replicated a system from the 1980s using today's commodity hardware. I recall reading about ALVINN as a kid, with wonderment about how it "worked"; it was one of several 1980s projects in the space that got me hooked on wanting to learn how to make computers learn.


Machine learning is literally just math. You could learn some plug and play without understanding the math, but I'm not sure you would be able to solve any real problems.


The team I'm going to the convention with is much better at math than I am, but they haven't touched any code before. I, on the other hand, have touched a fair amount of code in the past three years, but don't have many foundational math skills.


If you want to learn Deep Learning without all the math - I'm currently releasing a free video per week for something I'm calling the "Summer of AI": https://summerofai.com/

It walks you through all the basics of deep learning (with PyTorch) with a concept video, code video, and then suggested project for each week.


PyTorch is specifically what I was advised to use, so this is perfect. Thank you. Do you have an ETA whatsoever on the image recognition video? I understand they come out weekly.


Thank you Chris! Seems like a super effective course at a glance. Will be looking through this!


I've gone through a few of the modules, and I can say it's been really effective so far.


Kaggle.com has some competitions and learning modules.


The thing is, everyone who thinks AI will be true AI in our time is someone who doesn't understand what AI really is.


> Not all bubbles have negative consequences for the economy

Unless we are using a different definition of "bubble" than it would seem (to me) most people intend when they use that word, this is a factually incorrect statement.

If widespread misallocation of investment capital does not "hurt" the economy, then what does? If capital is invested with a positive outcome for the wrong reasons, then that certainly would not fit the definition.

First applicable definition from a google search:

Bubble: used to refer to a significant, usually rapid, increase in asset prices that is soon followed by a collapse in prices and typically arises from speculation or enthusiasm rather than intrinsic increases in value.


You may find this perspective interesting: https://thehill.com/opinion/finance/356376-black-monday-less...

Defining "bubble" can be surprisingly difficult and in some cases the illusion of a bubble persists after history proves the speculation or enthusiasm to have been correct despite an earlier collapse in prices. So one interpretation of "Not all bubbles have negative consequences for the economy" is that there are events that are widely perceived to have been bubbles that were not, in fact, widespread misallocations of investment capital.


Some economists argued the dot com bubble was a net positive for the economy. While tanking dot coms lost money for their investors, positive externalities like the funding of broadband networks outweighed that.

Along the lines of: Webvan may have tanked, but we ended up with Google and Wikipedia. I imagine likewise that most of the present AI startups will tank, but we'll end up with useful AI of serious value.


> Some economists argued the dot com bubble was a net positive for the economy.

I am no economist, but virtually any economist I have read would respond to that statement with:

Compared to what?


I guess it would be compared to normal levels of investment and valuations of dot com businesses. Though I'll give you it's a little vague. I read something a while back but can't find it.


I'm a bit skeptical there is a bubble as defined in the article as "when the market value of assets decouple from their intrinsic value and expectations of rising valuations generate investor demand." As a speculator where are these soaring AI stocks for me to punt on?

By the way, you can read the article with a trial account from O'Reilly, but there's not much to it beyond what's in the summary: https://learning.oreilly.com/library/view/learning-to-love/5...


Isn't the lack of stocks to punt on a dysfunction of the investment system? The VC market is doing the speculation, and you are locked out - not because they don't want your money, but because they don't want you to have a chance of getting theirs!


If the bubble is only VC money as you suggest, then I don't see it having very strong negative effects outside of Silicon Valley.

I'm more worried about global economic crises than VCs losing their money.


I've said this many times when I watch VCs invest in some damn fool thing: there should be a market for selling such investments short. Maybe I should ask a VC for backing for this idea.


There's probably some betting shop for this.


I'm not sure to be honest. Google is the company that probably has the most value in its AI projects but the money it makes from advertising dominates. You could argue that buying Tesla stock is a punt on its 'full self driving' stuff but TSLA is not really soaring and people don't believe them.


I personally would put my money on Nvidia when it comes to most value in AI projects, but I don't have inside knowledge with either company.


Nvidia makes shovels. They stand to make a lot of money no matter which flavor of AI works out (possibly more than anyone else), all without having to do any AI themselves.


Making shovels for the AI crowd is also a lot safer than making them for the bit miners.


Kinda and kinda not. They have done well so far, but they could conceivably be beaten by a dedicated chip. They need to make sure they keep up with the research so they build what the community wants.


Except Nvidia publishes some of the best (and most) machine learning papers compared to other companies or universities.


Its research is heavily focussed on showcasing new applications for Nvidia hardware.


How much VC money is going into direct AI / ML startups as opposed to generic consumer cloud or SaaS startups?


"This time it's different" sentiment is a necessary ingredient of any bubble.


Sometimes it really is different. And sometimes it isn't.


This is the first 'bubble' where I've felt it really may be different. Along the lines of this stuff https://twitter.com/paulg/status/1130133801403858954


Let's hope there isn't another winter.


The AI bubble can have negative effects:

Misallocation of limited human resources: good people go from doing long-term fundamental research to developing applications in startups.

Overinvestment and misallocation of capital resources during the bubble can lead to a long period of underinvestment once the bubble bursts.


I want to create an AI startup that automatically bids for tulips in the crypto market.


I feel bad for everyone chasing the AI education bubble...


There are ups and downs as the tide rises. I'm not sure I would call the crests "bubbles".


That's why: F*ck AI (halisback.com) ;)




