What are the threats to the AI boom? (economist.com)
20 points by viewtransform 55 days ago | 50 comments



I would say once people realize that AI answers and solutions are frequently “convincing but wrong.” AI is amazing, but when I try to use it for anything technical in my field (real estate), I quickly see that it is more than eager to tell you the wrong answer. It’s like reading a consulting intern’s analysis and recommendations. I will keep trying it, but I’m not sure at what point I’ll decide to rely on it, thanks to the garbage output I’ve come across.

I would guess, as with self-driving, the initial sprint to 95% functionality is the low-hanging fruit, requiring only a few hundred billion, but the last 4.9999% that makes it more usable than a human expert is several more trillion dollars of spend away. That human expert only costs me $250,000/yr, and I can hire him by the 6-minute increment. We will get there, though.


Personally, it blows my mind that there are people out there who haven't experienced LLMs writing paragraphs of polished, flowing prose that are ultimately straight-up wrong. I only use LLMs for option/context discovery and to refine my own thinking, similar to how I'll bounce ideas off of friends. But even there they tend to get stuck emphasizing obscure, less-popular ideas. So it's more like checking with one friend who is happy to geek out on every topic, but has ended up going down some weird rabbit holes on many.

Still, I don't think this is going to be enough to kill the energy for it (humans are confidently wrong quite often as well). But it should guide how the technology is adopted and the regulations surrounding it. For example, it shouldn't be used to make any decisions itself, but rather to help a human who is ultimately responsible for applying judgement, with their decisions necessarily rebuttable/appealable. Although, given our past several decades of "computer says no" and "policy says no", I'm not so hopeful on this front.


It is a very interesting idea generator, in terms of kicking ideas around with friends.

I think the sticking point is that, continuing with real estate, a very programmatic task I encounter is 'write a contract to purchase XYZ property at ABC address with the following requirements...' Not too different from spitting out some lines of code. An attorney does that for me: he will start with an old contract we've used in the past, and it takes his team 30 minutes to an hour at a blended rate of $500/hr. So that starts to set an upper bound on the value of the work to be performed, in one of the higher per-hour-cost lines of work out there.

When you really drill down on the question of "but how will it generate value," it is relegated to being an efficiency tool for many and a replacement for Google search for most. The smart will get more effective and powerful with better information; the less intelligent will be running off inaccurate or old info.


Convincing and wrong is pretty damning. I think people fail to appreciate just how trustworthy our computing systems have been till now. You don’t question the results of your spreadsheet. You don’t worry that a word was replaced in a communication with your lawyer. We’ll soon stop tolerating AI that is less than 100% accurate.

I predict an AI product bloodbath as users and companies remember that we hold computers to an extraordinarily high standard of accuracy.


A "consulting intern". That's a beautiful way to put it; it shows the problem perfectly. If I need someone to consult with, I don't pick an intern.


A very spiffy polished presentation that was very expensive and... partially correct?


I don't think so for several reasons:

1. That's fairly obvious to people who have used it. I mean, I guess there are a lot of people who have never used ChatGPT and think it doesn't bullshit, but those people aren't behind the AI boom.

2. There are loads of applications where 100% accuracy isn't required (even if it would be nice!). Obvious example is GitHub Copilot. It saves me a ton of time overall even if I often have to fix its mistakes and bullshit.

3. I imagine something like half of the AI research community is working on fixing this. And they don't need to get it to 0% bullshit, just less bullshit than humans (which tbf is sometimes a low bar!).


I don't dispute the incredible capability; I'm simply concerned that the true value creation is being wildly overblown.

Your 2nd example is a handy one to pick through. What % of your time is spent writing code compared to other work tasks? How much of that time did Copilot save? How much time do you have to spend fixing and validating (be honest, as if your quarterly bonus depends on it)? What's your hourly rate? What's the real cost of the high-quality, cutting-edge LLM (not the VC-subsidized 'price')?

And then... does that mean you get to go to lunch early or don't have to stay late? (No savings to your company.) Or did it knock out a week's worth of work? What prevents Copilot from taking over your role completely? (I doubt it is anywhere close to that; you likely do a lot more than type out code, and it still requires the correct input and output validation and an understanding of goals with nuance and uncertainty.)

So then, in real dollars, it's a useful, potentially very expensive tool, which is great. Is it $48B great (for Google)? Maybe! But that's a big bet!

I admit I do not understand how researchers would program their models to be more accurate as questions progress from easy to esoteric, from 2+2=4 to "Should I avoid drinking corn syrup?" (ahem, advertisers would like a word!) to "Which religion is better, A or B?" I don't see a way for AI to be unbiased and clean, and therefore trustworthy and useful at face value.


> What % of your time is spent writing code compared to other work tasks?

Probably 50%?

> How much of that time did Copilot save?

Unfortunately I can't use it at work, but I use it in my free time, and I'd say it increases productivity by something like 10-50% depending on the task. 10% is more common, of course, but given how expensive developer time is, a Copilot subscription is worth it if it increases my programming productivity by even 0.5%, which it easily exceeds.
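
To make that break-even arithmetic concrete, here's a rough sketch (all the dollar figures are round-number assumptions for illustration, not my actual pay or plan price):

    # Back-of-the-envelope break-even check (all figures assumed)
    dev_cost_per_year = 150_000    # assumed fully loaded developer cost, USD/yr
    coding_fraction = 0.5          # ~50% of time spent writing code, per above
    gain = 0.005                   # the 0.5% break-even productivity figure
    subscription = 10 * 12         # assumed ~$10/month Copilot-style plan, USD/yr

    value_per_year = dev_cost_per_year * coding_fraction * gain
    print(value_per_year)                 # 375.0
    print(value_per_year > subscription)  # True: even 0.5% covers the subscription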

> And then... does that mean you get to go to lunch early or don't have to stay late? (No savings to your company.) Or did it knock out a week's worth of work?

My company would reap most of the benefit because they are paying me hourly.

> What prevents Copilot from taking over your role completely? (I doubt it is anywhere close to that,

Yeah, basically it's nowhere near smart enough. Needs several orders of magnitude more intelligence before it could fully replace me. At that point I think society will have bigger problems, because it will have replaced 90% of white-collar jobs in general, not just programming.

> I don't see a way for AI to be unbiased and clean, and therefore trustworthy and useful at face value.

This also eliminates humans.


All great points, thanks.

> Needs several orders of magnitude more intelligence before it could fully replace me.

I think this is true for many applications of AI, where everyone assumes everyone else's job is screwed, but the expert looks under the hood and says, 'Well, isn't that cute. Delete it and start over.'


> And they don't need to get it to 0% bullshit, just less bullshit than humans

Less bullshit than humans is a terrible goal.

If you take the real estate industry examples, you want less bullshit than a really good real estate reference book, not less bullshit than random people on Reddit.


I didn't say which humans. Humans wrote that real estate reference book.


> I didn't say which humans.

"which tbf is sometimes a low bar!"


Yes, sometimes. I'm not sure what your point is.


Watching the ads during US Olympics coverage, I'd say we're right where we were in February 2022 for crypto. Every other ad is some very pricey AI ad. I think we're likely in for a pop in maybe 6-9 months, and then of course it will just slowly become normal.


I’m using AI in my daily life. ChatGPT has largely replaced Google. My programming output and ability is greatly enhanced. I’m working on projects that were impossible a few years ago. How can people say this is another hype bubble?


Some people say that it has greatly enhanced their abilities; other people say that it's mostly useless. I am myself in the latter camp. I never built the habit of using it because it failed miserably on relatively simple things. However, even if it were smarter, I can hardly see how designing the prompt and correcting it multiple times could be faster than just typing code in a flow state.


What about things like Copilot that are simply predictive autocomplete? It doesn't pull you out of flow state but dramatically increases typing speed, especially for boilerplate.


It absolutely kills flow state for me personally because I’m being shown code completions that may or may not have anything to do with what I am actually about to write.

When I allow myself to review the relevance and correctness of the attempted completion, I have to set aside the actual code that I was envisioning. This may or may not be easily recoverable after the (generally wrong) suggested completion is dismissed.

I am not neurotypical, and we are all different anyway, so it is obviously different for others. But I disabled the active code assistance option at work almost immediately due to it being actively harmful to maintaining flow state.


Investors aren't hyped about AI being a nice quality-of-life tool for the general populace. They are investing based on the notion that it will eventually replace a lot of employees and make expensive skilled labor obsolete.


Was the internet a revolutionary technology? Was there a very real dot-com bubble?


To expand on your point:

My lesson from the dot-com boom is that it's very hard to pick the winners. For every Amazon, there were a thousand pets.coms. Investing in DEC, Sun, Yahoo, and friends would not have paid off.

That's a broader lesson too. I have an okay -- far from perfect but decent -- track record knowing which national economies will grow, but investing in index funds for those economies rarely yields good returns. I have a tiny portion of my portfolio in various developing markets, and the GDP can rise tenfold with no investment gains.

Many very real revolutions have bubbles. A bubble is the period when people over-invest.


Because some other people are operating under the assumption that this wave of AI models will automate all labor on the planet, instead of making the average desk jockey 10-25 percent more efficient.


A 25 percent efficiency boost to desk jockey labor is a profound impact.


An impact on what? Most desk-jockey labor hasn't been optimized for efficiency in decades. I'd say the amount of "work" will simply go up by 25%, and the position of the human-based processing nodes will shift a little bit. But it's a Schelling-point attractor that mutually antagonistic companies will be pushed into adopting. You certainly won't want to hire 25% more human employees at your own company to handle your counterparties' increased paperwork generation from their use of AI.


For whom?

Nearly all the wealth generated by productivity gains over the last 50 years has been pocketed by business owners and investors.

https://assets.weforum.org/editor/HFNnYrqruqvI_-Skg2C7ZYjdcX...


Feel like you’re answering your own question here


> A 25 percent efficiency boost to desk jockey labor is a profound impact.

In terms of a 25% productivity gain, yeah, I think that is a good thing and could be true. However, the problem is that it is sold under the pretense that you can fire a large chunk of workers and rely in large part on AI, which makes it a whole lot more attractive to investors.


That is precisely what you can do if your workers are 25% more efficient.

Yes, it's just a matter of recognizing it's 25% of various job titles, not 100% of 25% of job titles.

Or alternatively, drastically lower the skill requirement for the job and replace your skilled workers with cheap unskilled workers


> That is precisely what you can do if your workers are 25% more efficient.

If AI is just a tool that improves performance by 25%, that changes what investors believe. It is a lot more expensive than a 25% increase in productivity would justify; investors were sold a different reality.

> Or alternatively, drastically lower the skill requirement for the job and replace your skilled workers with cheap unskilled workers

With the error rate of GenAI, I'm curious to see how this plays out in the long run. It seems that only more-skilled workers can pick up on the wrong answers.


Shrug. I’m not the one claiming 25% is a good figure. Just that it’s a huge deal, if true.


I find it insanely useful for moderately basic tasks I don't do often, e.g. "write me a regex that will match XYZ", or basic LaTeX formatting. Much more efficient than googling for docs!
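
For instance (a hypothetical stand-in for the "XYZ" placeholder, since I didn't name a specific pattern), a prompt like "write me a regex that matches a US ZIP code, with an optional +4 suffix" might come back as something like:

    import re

    # Hypothetical example task: match a US ZIP code, optionally with a +4 suffix
    zip_re = re.compile(r"^\d{5}(?:-\d{4})?$")

    print(bool(zip_re.match("94103")))       # True
    print(bool(zip_re.match("94103-1234")))  # True
    print(bool(zip_re.match("9410")))        # False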


Whilst the benefits of AI are real, the public's expectations exceed reality.

In other words, a bubble occurs when expectations >> reality.



I think the saddest thing about this is that it shows how we humans act irrationally while thinking we act rationally.

Investors are throwing endless amounts of money at companies with a "dead end" product (current image-gen/LLM technology has a hard limit on improvement, and only a whole new approach will surpass it).

The reality is that if we want real AI, we ought to invest in neuroscience and in fields whose findings can then be converted into programmable logic (for example, we can now create rudimentary emotions via logic gates).


We have no idea what the growth curve here looks like. People make predictions like this, or on the other side, of a singularity, but it's all just speculation.

A good approach is to treat all outcomes, from plateau to linear growth to polynomial to exponential to singularity, as within the realm of possibility.

Or to have the humility to acknowledge that we may be wrong.


“We have no idea what the growth curve looks like” does not merit a “no matter the price” investment model.

What we do have is theory, hubris, potential, and speculation. And FOMO.

Genuine question: have any AI product marketers slapped a price tag on their services that comes remotely close to the cost of providing that service?


There is some research toward understanding LLM limits, and it's not promising. https://www.youtube.com/watch?v=dDUC-LqVrPU


What will kill the boom and wake people up from this mass hysteria is an economic shock from outside AI.

Of course, the madness of crowds would conclude that the risk here is underinvestment.


https://www.dexerto.com/tech/openai-on-verge-of-bankruptcy-h...

When VCs sell their 2021-2023 bags, if the technology is still not economically self-sustaining, it's over.


Neither OpenAI nor any of the prompt-based AI companies actually "need" the revenue from the services they sell. The whole point of having a public prompt (free or not) facing the entire planet is having live humans doing RLHF 24x7x365. That information, that dataset, is more valuable than any symbolic amount of money anybody is willing to pay for a GPT or clone subscription.

Has anybody noticed that any current or near-future revenue won't make a dent in the actual costs of running giant models? Anyway, the models keep chugging along just fine. And the ("free") money keeps flowing in.


Moreover, those millions of non-paying clients prompting the models 24x7x365 are working on 100% real-world problems, feeding the models valuable prompts and generating valuable content (originating in real situations, actually distilled from billions of invaluable human sensory inputs).

That content can be and is used to train models, effectively cancelling the "data-wall", bit by bit, all the time.


> That content can be and is used to train models, effectively cancelling the "data-wall", bit by bit, all the time.

It really cannot.


I'm not familiar with this topic.

Could you elaborate?


There's just so much missing information and so many changing contexts. Forget any specific model: you will not become particularly good at any domain-specific task by reading prompts of people asking questions. That data sucks.


Not really. China is just a step behind; in a year they will be at the current US AI state of the art. Without competition, from there they'll have all the GPUs in the world to keep improving their models.

Or, most probably, US national interests will step in, just as they did with SpaceX, and provide whatever billions are required to stay in the fight, at least till China steps out of the AI race.

The (naive?) naysayers aside, until the technology actually shows it is failing to keep the promise of super-powerful AIs, it is a global geopolitical race to get there (AGI/ASI), or to the point where it fails (a new AI winter).

The point is to make quite sure it really doesn't work, rather than dropping out of the race just to see a Chinese ASI (artificial superintelligence) emerge a couple of years later.


China isn't even close to the state of the art. Take a look at the chips they put out: the Chinese repeatedly "claim" they are building chips equivalent to a four- or five-year-old Intel chip (which might as well be a century old in terms of technology), until a tech YouTuber in the US actually gets their hands on one and realizes the benchmarks are completely fabricated.


Is there a particular video or channel you'd suggest?


Lots of insightful comments here, just like when Facebook stock took a brief dip and we were all able to explain how Mark Zuckerberg didn't get it: https://news.ycombinator.com/item?id=18450058


That it is a scam.



