Money is pouring into AI. Skeptics say it’s a ‘grift shift’ (institutionalinvestor.com)
136 points by flarecoder on Aug 31, 2023 | 210 comments



Investors who do nothing but follow the crowd dumped a tonne of money into cryptocurrency/blockchain startups regardless of merit. Now they are doing the same with AI. But AI actually has clear, immediate, and lasting value. Articles like this seem to be written by people who don’t understand the basic concepts, about people who don’t understand the basic concepts. Both sets of people just identify the trends and mistake the trend for the overall value. AI only looks like cryptocurrency/blockchain if you don’t know anything at all about either one beyond “they are trendy and attract people who like investing in trendy things”.


I think you are being harsh on the critics. A technology can have tremendous value while still attracting grifters and being an early investors' graveyard.

Best example is the dot-com bubble of 1996–2001: the web had truly world-changing potential, but grifting was rife and practically all early investors lost their money, unless you were one of the lucky ones who invested in Amazon, Yahoo, Google, or eBay out of the 10,000 companies being shilled back then (and even then you had to be patient enough to hold your stock for decades to make the really big gains; see the history of Amazon and Apple stock during this era).

There are many such examples in the history of technology concerning grifters and the fate of early investors; see also automobiles and the printing press (even with the printing press, Gutenberg lost a lot of money and faced many grifters/copycats).


Most companies failed, not necessarily most investors.

If you'd put your money into a Nasdaq based index fund in 1998 (that reinvested all dividends), you would have a pretty good return up to today.

Most good investors either spread out like that, or are very good at researching which companies have real promise and which do not.


> A technology can have tremendous value while still attracting grifters and being an early investors' graveyard.

Not sure how this compares to other recent technology revolutions, but most AI startups seem to be one Microsoft feature announcement away from bankruptcy. So even with the best intentions from founders, it seems like a really volatile market.


This is very similar to the app startup craze that Apple triggered. After every new iOS release or new FB announcement, a multitude of app startups were made redundant. For example, there were plenty of startups focused on bookmarking websites, or sharing notes, or allowing in-game adverts.


Couldn’t agree more. Crypto was a monetizable solution in search of a problem.

AI solves real problems, and has been doing so for many years. Generative AI is just the latest and most accessible instantiation of it.

My first company was a dialup ISP in the 90s. I look at GPT4 and I see the 56K modem of AI. There is so much upside to be discovered, we have barely started.


> Couldn’t agree more. Crypto was a monetizable solution in search of a problem.

True

> AI solves real problems

AI research and real products? Most probably. All those new companies/startups that are wrappers around ChatGPT? Not so much. Maybe someone will do something that really brings value, but most of them sound and act like the blockchain startups we had until 12 months ago.


GPT wrapper products are the equivalent of static corporate HTML pages from the 90s. We build them because we are the early adopters of a brand new technology and we’re working out how it works.

I’m working with such a company and yes, the first product is a (very nuanced and very domain specific) wrapper around GPT. But as we gain experience, we expect to develop far more complex and interesting products. But we gotta start somewhere.

2 years ago, nobody even knew about ChatGPT. We are at the very start of this wave and we’re just still plucking the low hanging fruit. The fact that GPT wrappers are useful speaks directly to the potential of the technology.
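In rough terms, the "wrapper" pattern really is tiny — here's a minimal sketch (names are hypothetical, and the model call is stubbed out so it runs offline; in a real product `complete` would be an OpenAI-style chat-completion client):

```python
# Minimal sketch of a "GPT wrapper" product: the domain value lives in the
# prompt template and light post-processing, not in the model call itself.
# `complete` stands in for any chat-completion client; injecting it keeps
# the wrapper testable without network access or an API key.

def make_clause_summarizer(complete):
    """Build a domain-specific summarizer on top of a generic LLM call."""
    template = (
        "You are a legal assistant. Summarize the following contract "
        "clause in one plain-English sentence:\n\n{clause}"
    )

    def summarize(clause: str) -> str:
        return complete(template.format(clause=clause)).strip()

    return summarize


# Stubbed model call, just to show the shape of the pipeline:
fake_llm = lambda prompt: "  The tenant must give 30 days' written notice.  "
summarize = make_clause_summarizer(fake_llm)
print(summarize("Lessee shall provide written notice no fewer than 30 days prior..."))
# → "The tenant must give 30 days' written notice."
```

The point being: everything product-specific fits in a prompt template and a few lines of glue, which is exactly why these products are quick to build and quick to obsolete.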

Contrast this with crypto, which wasn’t even capable of solving the problems it was supposed to solve, and where losses incurred by things like smart contracts were almost inevitable. The products being built with AI are real products that real people are paying for. The difference couldn’t be more stark.


> 2 years ago, nobody even knew about ChatGPT

So ChatGPT isn’t 2 years old, but I recently discovered that if you search HN for GPT-2, you’ll find years of people discussing it, below hype levels. You’ll find “did GPT-2 write this” type comments like the ones we see today with GPT-3 and GPT-4. You’ll see people training it on their own texts to talk to themselves. A lot of the ChatGPT-induced hype hypotheticals were already done.


> A lot of the ChatGPT induced hype hypotheticals were already done

I really disagree with this.

The more broadly useful and available this technology becomes, the more interesting and unexpected uses will be discovered for it. The people I'm working with now are not remotely software people but they are generating real value from GPT in ways I would never have imagined. I look at what they're doing and I think to myself - a seasoned software engineer - OMG that's incredible, I would never have thought to use it that way.

That's why I say that GPT4 is the 56K modem of AI. Until now it's just been largely software geeks. But now that it's getting out into the broader world, people are coming up with novel ways to use it that are unexpected and occasionally ingenious. Ways to help people that most software engineers wouldn't have the domain experience to think of.

And they're just getting started.


Have you seen the new AI photoshop where you can just AI fill anything? And it just works?

The number of people who can edit photos is now basically unlimited. That's thanks to generative AI.

Just like how computers enabled billions to do the most complex calculations within a second, while before you needed to learn that skill over years of practice.


> Have you seen the new AI photoshop where you can just AI fill anything? And it just works?

> The number of people who can edit photos is now basically unlimited. That's thanks to generative AI.

It's also available at zero cost, thanks to the ecosystem around Stable Diffusion.

You probably know the meme that gets parroted randomly about OpenAI needing a moat; well, what's Adobe's moat, now that this exists?


Adobe's moat is that GIMP still cannot properly edit 4-channel CMYK images. And none of the competitors can print with ICC profile correction. I kid you not, there is no way around Adobe for any kind of serious image editing.

Adobe's second moat is PDF. And it builds upon their imaging moat because companies want their product boxes to be sharp and with accurate colors, so there is an entire industry built around PDF-based print and cut and glue manufacturing.

So whether or not they throw an AI bone to the kids is probably irrelevant to their bottom line. My guess would be that Adobe added those AI features mainly to make their company look more sexy to potential future hires.


CMYK and ICC profiles are irrelevant in the gamedev and vfx industries though.


You need ICC if you want your digital to film printer to generate accurate results.

But yeah, for gamedev Adobe Photoshop doesn't have that much benefit over specialized tools like Substance Painter .. oh wait, they recently bought that company ;)


Adobe's business model is to provide you the benefits of wild-west AI with the legal stability of an Adobe service.

With Firefly they offer a tool an art agency can use daily without having to worry about hidden legal implications of using AI...


The design world runs on Creative Suite, and the Figma acquisition improved that position. There's a network effect to the software, the file format, and creative team collaboration tools.

Given this, competitors need to be meaningfully better, not just equivalent or somewhat better. Adobe is still in the position to be the go to product suite for creative professionals.


> well, what's Adobe's moat, now that this exists?

Although this is debatable, let's accept for the sake of argument that Adobe has no moat.

If this is the case, this may be bad for adobe, but it is amazing for the design world. Producing more art for cheaper is real value.

And we may not know who specifically will capture that value, but it is undeniably value nonetheless that will go somewhere. That still benefits the economy as a whole, even if we can't guess which AI startup will win the race.


> That still benefits the economy as a whole, even if we can't guess which AI startup will win the race.

One possible outcome is that the entire economy benefits without any AI startup winning — even if Stability AI goes under for whatever reason, the various Stable Diffusion models are still out there getting used to make pictures.


> Producing more art for cheaper is real value.

Except that if art production becomes too easy, the prices go down, and the only party benefiting is the consumer/customer.


Fortunately, the only party that matters is the consumer.

Businesses may exist and be necessary to serve the consumer. But they are not valuable in and of themselves.


As if that's a bad thing.


Generally, it's not.

Though it tends to disappear into effects that are taken for granted.

Like the improvement in mobile phone quality since 1990.

Just like how easy access to music through Spotify may make us appreciate music less than if we have to actually put a vinyl record on, easily accessed art may also reduce our perceived appreciation.


Well, Stable Diffusion’s parent company is nearly dead due to funding issues. So competing isn’t something they’re succeeding at.

The moat memo misses the truth that Google knows: you need existing customers. Ideally paying customers.

Firefly doesn’t have a moat; it has a well. It has a way to sustain and fund development, growth, and servers. That’s why Google doesn’t need a moat. They have the funds to take on anyone, and probably win.


I'm regularly using the Photoshop function to test its limits and applications to our printing business. It's kind of impressive at first but it lacks quality where it matters, especially with the more complicated jobs.

Also "editing photos" is more than what the AI does. We have solutions for that, on another level though, for the last 15-20 years already.


> It's kind of impressive at first but it lacks quality where it matters, especially with the more complicated jobs.

Same with the higher-complexity written tasks I’ve been chipping away at. The quality dropoff is a cliff. Crossing that chasm feels like it will take more than incremental improvement of current tools; it feels like an entirely new technology is needed.


I'd expect the real implementation to not be nearly as good as the demo shows.


My concern would be that the investors can't tell the good from the bad and will throw money at anything that has an "AI" checkbox on the box. So money won't necessarily go to the projects where it could generate the most value. Large amounts of money are going to go to those who promise the most amazing things and really hype the AI.

In the end I doubt that those who are able to use AI most effectively will need to slap the "AI" label on their products, they'll just be the better products.

As much as I believe crypto-currencies to be mostly a scam, blockchain technology exists in products most of us aren't even aware of, because it's a neat solution to a certain set of problems. It's just not all that sexy.


I think the problem was "buy drugs online"


None of this "knowing why others are stupid" and "AI is much, much more useful" matters if the topic is rampaged and distorted by a clueless pile of wealth into momentary apparent fame instead of being developed sensibly. Crypto and blockchain could have had much better merit if their reputation hadn't been destroyed by a careless, forced-through mania. Society will mess AI up into something of a freak, maybe even quicker given its powerful and hence scary nature, if maniacs push it into something that causes serious damage or catastrophe, triggering hard regulations and bans. At least for our lifetime, that would slow its much-needed introduction into daily life to a timespan of several generations. AI could, and probably will, be castrated rapidly by the frenzy of huge piles of clueless money.


The distance between "sort of works" and "works" for AI is considerable. Not infinite.

Look at self-driving cars. The first tries were in the late 1950s, with GM's Firebird III, guided by wires in the road. By the 1980s, the first self-driving vehicles were moving around CMU, very slowly. By the early 1990s, experimental highway driving had been demoed. In the early 2000s, we had the DARPA Grand Challenge, which had off-road driving on empty roads working. Then there were a few experimental self-driving cars that sort of worked on general roads. Many startups, most went bust.

Today you can take a driverless cab in San Francisco. 64 years since GM's Firebird III. (Which still exists, in driveable condition, in GM's in-house collection.)

It may take a while to get from GPT-3 to Microsoft Middle Manager 2.0. But the path is clear now.


> The distance between "sort of works" and "works" for AI is considerable. Not infinite.

> Today you can take a driverless cab in San Francisco [...]

From the outside, it sure does look like driverless is still firmly at "sort of works":

"After California regulators approved the expansion of driverless taxi services in San Francisco earlier this month, it took only a little more than 24 hours for a series of events to begin that seemed to justify the taxis’ detractors.

The day after the vote, 10 autonomous vehicles operated by Cruise, a subsidiary of General Motors, abruptly stopped functioning in the middle of a busy street in the North Beach neighborhood of San Francisco. Posts to social media showed the cars jammed up, their hazard lights flashing, blocking traffic for 15 minutes.

A few days later, another Cruise vehicle drove into a paving project in the Western Addition and got stuck in freshly poured concrete.

And then last week, a Cruise car collided with a fire truck in the city, injuring a passenger in the car.

So it was that last Friday Cruise agreed to a request from the California Department of Motor Vehicles to cut in half the number of vehicles it operated in San Francisco, even though regulatory approval for more remained in place. The company, which has had 400 driverless vehicles operating in the city, will now have no more than 50 cars running during the day and 150 at night."[0]

[0] https://www.nytimes.com/2023/08/22/us/california-autonomous-...


That's Cruise. Waymo has driven 1 million miles as of last January with only two incidents that have met the government's reporting criteria and no injuries. Those stats are impressive.

https://waymo.com/blog/2023/02/first-million-rider-only-mile...


Cruise started as a "fake it til you make it" operation. The tradition continues.


Does Microsoft Middle Manager 2.0 stop working in the presence of traffic cones?

More seriously, the distance between "sort of works" and "works" might not be infinite, but it most likely involves fundamentally unpredictable future developments of the current technology. There is no straight line of incremental improvements that gets us there.

It's fairly straightforward to imagine that if you have a 4.77 MHz CPU and 64 KB of RAM, you will soon have a 3 GHz CPU and 64 GB of RAM.

Bigger number going brrr is no guarantee of anything, here, so much so that models using a fraction of GPT4's resources are somewhat competitive.

By all means continue developing the technology, but claims that we are within arm's reach of X, for disparate values of X, are not exactly supported by anything.


> Does Microsoft Middle Manager 2.0 stop working in the presence of traffic cones?

I think if you put a traffic cone on my IRL manager's head she’d stop working too. Maybe a bit different… but maybe not.

I don’t think the poster was saying we’re within arm's reach of it, but that there’s a path that takes us from here to there. A MSFT AI manager obviously wouldn’t behave exactly like a real human, but a tool that aggregates and summarizes information from many reports for a high-level manager, and helps negotiate priorities, is something potentially doable with some prompt engineering and advancements in models.

I respect many of my past managers, and some of them were great mentors and materially improved my life but 50+% of the value managers provide to an organization can be done with a bit of glue between GPT-x and Jira. It’d free up a lot of their time for the remaining 50% too.
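To make that "bit of glue" concrete, here's an entirely hypothetical sketch: the ticket dicts stand in for a real Jira query, and `complete` stands in for a GPT-style chat-completion client (stubbed here so the example runs offline):

```python
# Hypothetical sketch of "glue between GPT-x and Jira": flatten ticket
# state into one prompt and ask a model for the status rollup a middle
# manager would otherwise write by hand. No real Jira or OpenAI calls;
# `complete` is a stand-in for any chat-completion client.

def status_report(tickets, complete):
    """Turn a list of ticket dicts into a leadership-facing status update."""
    lines = [f"- {t['key']}: {t['summary']} [{t['status']}]" for t in tickets]
    prompt = (
        "Write a three-sentence status update for leadership "
        "based on these tickets:\n" + "\n".join(lines)
    )
    return complete(prompt)


tickets = [
    {"key": "PROJ-1", "summary": "Login flow rework", "status": "Done"},
    {"key": "PROJ-2", "summary": "Billing double-charge bug", "status": "Blocked"},
]

# Identity stub shows exactly what the model would be asked:
print(status_report(tickets, lambda prompt: prompt))
```

The interesting work is all in deciding what context to pull and how to prompt it; the plumbing itself is trivially small, which is rather the point.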


> Does Microsoft Middle Manager 2.0 stop working in the presence of traffic cones?

That could be a benefit. Imagine being able to go dev shields up with just cones around your desk.

When it is time to innovate, surround yourself with cones.

"You haven't checked on your 'resources' in an hour, how is resource #4233 (Johnson) doing on that new feature?" -- funding borg consultcult front man

"I don't know, his cone shields are up, we can't get our needed hourly metrics" -- Microsoft Middle Manager 2.0


> Does Microsoft Middle Manager 2.0 stop working in the presence of traffic cones?

Yes, but it integrates well into Outlook.


> Microsoft Middle Manager 2.0

I think we've found the common thread between AI and crypto: the dumping of externalities and increase in energy usage.

Do people really want to work for Microsoft Middle Manager? Have we not seen enough horror stories about metric-slavery in Amazon warehouses and the gig economy? It might be cheaper, but it's also worse, for a class of people who don't get any input in the decision. Similarly self-driving unleashes a new class of poorly behaved "learner" drivers on the road, who may be less aggressive but are also capable of causing problems from that very timidity and lack of general competence.


The other common thread is it’s all the same grifters. The same people who hyped up the world's slowest coal-powered linked list as the future of money are now hyping up AI. I do think AI is somewhat different, as we will be left with more to show for it than “PayPal for Cambodian human traffickers running pig butchering scams.”

My banal take is: notwithstanding above, all the risk premium is gone for investors and employees. Everything in the AI space is priced assuming flawless execution over a 20 year time frame. As a potential employee - or investor - that equity is just not interesting to me.


> The other common thread is it’s all the same grifters.

Yes, the same grifters are migrating to AI. But there's a real industry in AI, unlike blockchain, where it was almost all grifters.

The pure scam industries, day trading, binary options, cryptocurrency, contracts for difference, retail FOREX, etc. do have many of the same people. If you want to follow this, there's Offshore Alert.[1]

[1] https://www.offshorealert.com/


> same grifters are migrating to AI

This just in:

"MetaQuiz is the brainchild of MetFi DAO, a true trailblazer, investor, and incubator in the realms of AI, Metaverse, and Web3. With its unwavering commitment to innovation, MetFi DAO is reshaping education and entertainment in unprecedented ways. The platform aspires to redefine how individuals from diverse backgrounds learn, grow, and earn rewards in a secure and engaging environment. ... MetaQuiz introduces an unprecedented pathway, where the allure of learning seamlessly merges with tangible opportunities for earning."

AI, Metaverse, Web3, DAO, education, and gambling! What could possibly go wrong?


Just you wait until it realizes that firing you would result in a net carbon usage savings for the company.


That's true for driverless cars, but there are many areas where you don't need 100% to be useful. Stable Diffusion and LLMs are 90% there but still very useful and cool.


Most of those areas don't normally have a lot of value or you're marginally improving on existing recommendation systems. It's a coin toss on how much of an improvement this wave of LLM powered ML systems will have over the previous wave of "enterprise knowledge collation" systems and if that will be enough for people to buy them.


People seem to be buying them already, looks like OpenAI is now pulling in $80M per month in revenue.


Self-driving is never going to happen, and it's also sitting in some sort of informational blind spot for the people working on it. I have no idea why.

There is no such thing as "driving": there is no physical force or particle behind it. There is no force preventing you from driving in the opposite direction of traffic, or through glass panes.

"Driving" is entirely a social phenomenon, the confluence of societal self-impositions and engineering.

If you have a car on fire in front of you, you will need to reverse in the wrong direction of traffic.

In many countries - you have to regularly deal with drivers going full tilt, on the wrong side of the road.

Or you have to deal with theft, and people trying to rob you at every red light.

I'm underscoring that this is a social issue. You would need to create models for each country and region to truly improve self-driving.

Self-driving assumes a far narrower problem space than reality gives a fig for.

Self driving theory currently works in the same way any theory that assumes spherical cows works.


> In many countries - you have to regularly deal with drivers going full tilt, on the wrong side of the road.

I don't agree with most of your comment but this point is worth examining. I think it's not an argument against self driving for a few reasons:

1. There's lots of places in the world where people driving on the wrong side of the road is uncommon. We can start with self driving there. You can apply this to many other situations as well - AI can't handle ice yet? Well, let's start with non-icy roads. Even when you apply all the stipulations like these that you need, you'll still be left with a large enough percentage of the world to make self driving useful. Especially since a lot of the places that are suitable will be rich cities in developed countries.

2. Driving habits change. Thailand is a good example of this, it's a country in transition from the "developing country driving style" to the "developed country driving style", for want of better terms. Driving there 15 years ago was an extremely different, far noiser, far more dangerous experience than driving there now. It's still got a long way to go, of course.

3. If self driving becomes the norm, well then, problem solved. We don't even have to get to FSD. Partial but always on assisted driving that nags you whenever you drive on the wrong side of the road or go over the speed limit would probably be enough to cause a shift if most people have it.


Therein lies the rub

Movement is animal-like: perceiving self and environment, and understanding motion through it. Movement is universal. It's the application of physics.

That's not driving.

Driving is a social construct. It is the application of physics while navigating a social world.

Driving is observing the law, observing social constructs (that differ regionally), adapting to new constructs based on location and environment.

That's the blind spot for self-driving proponents. They conflate the two things, but talk primarily about the first.

As a result, you will never get self driving - the assumptions are wrong.

Let me put it this way: you get self-driving when a car decides its best course of action is to reverse into traffic, because it perceives a tsunami coming from ahead of it.


> Driving is a social construct. It is the application of physics while navigating a social world.

Interesting way of looking at it. In the countries where people tend to drive at high speed on the wrong side of the road, I'm inclined to agree with you.

In countries where people follow the rules of the road, I think it's the opposite. There, it's primates who evolved to follow and react to a complex social and physical environment being forced to follow a simple and rather tedious set of rules, which require constant vigilant attention. This is something we humans are not good at - we're designed for short bursts of attention in situations where rules need to be interpreted intuitively, not driving for three hours while strictly following the rules of the road.

> observing social constructs (that differ regionally)

Well, the social constructs around driving also differ temporally and are changing all the time. Self-driving, even partially implemented, is sure to have profound effects on these cultures all over the world.

>perceives a tsunami coming from ahead of it.

In extreme cases like tsunami, floods, or fires, there are solutions. In the short (and probably medium) term the cars will still have human controls, so the human can instantly take over. In the long term, well maybe there can be a "panic button" that allows a human to shout voice controls. Either way, it's just another solvable technical issue, and probably much easier than many other issues that need to be solved.

BTW, it's not like researchers haven't considered these things:

https://www.thecarconnection.com/news/1122534_self-driving-c...


I don't understand the leap from "it's difficult and there are many special circumstances" to "it's never going to happen". I don't get why you think that any of these problems are unsolvable given enough time.


Here's one way to get to the "it's never going to happen" outcome:

The grandparent comment argues that we need AI that works in a much broader set of circumstances to "solve" self-driving. In particular, it would need to understand how other humans would react to its actions in novel situations. That is approximately a description of having a Theory of Mind. Some argue that you can't have a Theory of Mind without being conscious. We might ban captive consciousnesses for ethical reasons.

I think you could invent lots of other scenarios that yield the "never going to happen" outcome. They probably all sound ludicrous, because having an AI that understands the workings of human minds sounds ludicrous (and frightening).


Actually, I'm not saying that. I'm saying that self-driving is nearing success, and that it took a while. LLMs are at the beginning of that process.


To me this is a technocratic stance - believing that technology will always find a solution and absolve us of all problems.

Maybe. Maybe not. What I read from his post is much closer in spirit to "we cannot simulate real physics down to the atom" when modeling protein folding, so what we do instead is build models (good-enough estimations) and AI approaches that recognize complex (but still top-level) patterns.

These reduced views on reality help our problem space to a large degree but they can never account for the full scope of the physical reality and they largely work on assumptions which might be proven wrong at any point in time.


Thats not the point.

The point is that proponents of self-driving have a blind spot. It's construed as a code problem, ignoring entirely the massive social aspect of human behavior.

Driving isn't locomotion. You can't assume that the people around you are going to be reasonable, sane, or predictable. More data doesn't fix that, unless you start building a "civilized behavior" model and then add that to your movement model.

I really want to see how honest people are when they build a fair representation of human civilization, warts and all.

Self driving as has been defined assumes some absurd things about the world in which humans live. No one is going to buy a car which wont know to run when they are about to get robbed.

To be blunt, the blind spot assumes you live in America (and probably california), not that you live in Brazil or India or Egypt.


Because no matter how much time you give it, there are social problems that need to be solved first; and said social problems are not the kind that are amenable to mere technical solutions.

Literally the case of: If the technically perfect implementation existed tomorrow, we still would not be ready to flip the switch because of how drastic the reorg of societal norms would be. It'd be a complete refactor.


This comment is pretty funny in the context of a world where there are currently cars driving millions of miles without human intervention in a variety of conditions. I don't think anyone cares whether it meets your personal definition of "self driving".

Reminds me of the Chinese proverb: “The man who says it cannot be done should not interrupt the man doing it.”


C'mon, you are the one misrepresenting the actual definition of self-driving, aka Level 5.

No vehicle is at level 5.

Calling it “my definition” and then switching the actual definition is unfair.


Who decided that only level 5 qualifies as "self-driving"?


This comment is pretty funny in the context of someone starting out by claiming no one cares “whether it meets your personal definition of "self driving".”


No, you said that I changed the “actual definition” of self-driving from Level 5 to something else. I’m asking what makes Level 5 the “actual definition” and not just an arbitrary point that you’ve personally decided means self-driving?


I guess like heavier than air powered flight will never happen. I mean there would be so many problems.


Indeed, it would probably require the invention of a true Artificial Intelligence, and of high quality as well. Not the glorified autocomplete the scammers are trying to pass as AI now, but the real thing that can sense and understand the world around it.


It'll happen, but the car won't really be self-driving; I think a literal AI robot chauffeur is more likely. AGI will be able to do everything a human can do, including driving. We're IMHO less than a decade, if not less than half a decade, from that.

If you don't believe me, look at a scatter chart of AI papers published since 2017. Notice how this year dwarfs all previous years, and last year blew the previous ones out of the water, while the years before generative art models were a bit slower and more regular.

If we can perfect AI agents and automated LLMs, we could create personified agents of varying backgrounds in a virtual computer lab tasked with doing AI research 24/7. We could put 60 AI scientists in the lab and just watch them work.

Maybe the lab is VR, such that human scientists could enter and collaborate with AI counterparts. Imagine applying this to cancer research, etc.

When an LLM can train itself, program its children (the next version), and submit orders or plans for new hardware fabs to a factory to breach current limitations, then we basically have von Neumann probes that multiply, grow, learn, and don't need human intervention.


LLMs are nowhere near being able to research other, better LLMs. It's not even known if existing LLMs are at 1% or at 99.99% of what an LLM can in principle achieve.


Being able to take a cab in a single tiny part of the world remains firmly in “sort of works” camp.


Also when you start to get into more extreme, but not actually that extreme, scenarios. I wonder how well these will do with, let's say, 5 cm of fresh snow. With no other traffic yet, meaning no road markings are visible, and in the worse case signs could be covered. Probably a yearly scenario in many parts of the world.


>It may take a while to get from GPT-3 to Microsoft Middle Manager 2.0. But the path is clear now.

I was intrigued by the announced Business Chat feature for Microsoft Teams <https://www.reddit.com/r/singularity/comments/11swyeu/introd...>, but learned that it is just summaries of conversations. That's not quite what I'd imagined, which is something like this:

----

A: ... and that is why I think we should go with option 1.

B: No, the points you mentioned support my case for option 2.

C: Nothing you guys have said changes my mind about option 3 being best.

D: Business Chat, what do you think?

BC: Based on this discussion, and my research, option 1 seems more realistic but option 2 would be more profitable if possible. My reasons are ...

C: Business Chat and you guys all don't understand point N, which is the main reason why option 3 is best.

B: Higher profit is exactly why I think option 2 is the way to go.

A: No, our rival is going to hit the market next month. We need to get something out there ASAP. Option 1 can do this.

D: You've all given me things to think about. Thank you for coming. Business Chat, email me a summary of the meeting, and set up a followup meeting for Tuesday 3pm.

----

That is, AI used as a colleague/assistant, not necessarily subordinate but not seen as omniscient, either; another viewpoint to consider. Like you said, Middle Manager 2.0. When do you think the above will be feasible? This year? Five years?


It might be more an issue of "if" than "when". Can LLMs even go beyond being an "interactive textbook"? On the other hand, looking at the constant complaints from data scientists, ChatGPT can replace them here and now. It might not be able to understand the problems, but supplying the boss with arguments that support their decision is a matter of weeks away; maybe just an extra preamble/prompt will be enough.


> It may take a while to get from GPT-3 to Microsoft Middle Manager 2.0. But the path is clear now.

Is there a way you could make the path clear to me too? I just don't get it. Sure, I can imagine in principle it's possible. But I don't see how you can already see it's certain. Is there something you can share that will allow me to see why it's certain?


Transformative tech: cars. If you invested in the early movers you very likely lost your money.

Transformative tech: personal computers. If you invested in early movers you missed Microsoft, and you very likely lost your money.

Transformative tech: Internet/web, dotcom boom. Google, Facebook, and Twitter were not investable, or did not even exist, up to when it popped.

Whether you believe AI is going to be massively transformative to the modern economy on a scale with cars or personal computing is actually not sufficient to start investing in the 'sector' (for want of a better term).

So this is useful as a reverse indicator. Anyone investing buckets in AI is worth betting against in general. Everyone? Maybe not everyone. Maybe.

Likewise even if the blockchain sector is dead. (Is it? No clue) This does not necessarily make the technology dead. (Even if you would like it to be). The web came back and reinvented as a massive distributed surveillance machine. Who would have predicted that in the crash? Who would have wanted it? Well we got it anyway.


This seems wrong, though?

For cars, the inventor is (Mercedes) Benz, and the company that first mass-produced them was Ford. Both seem like good investments.

PC: Who is this early mover you talk about? I guess not Apple in your mind, since that would've been a fantastic investment. Hard to think of any better.

Plus, there are hundreds if not thousands of companies along the way you could've invested in that were swallowed up by the bigger fish. That's still a good investment since you either get a good exit or shares in the bigger fish.


Survivor bias is massive here, watch out for it.

Osborne beat Apple to personal computing. Facebook did not exist during the dotcom boom. So very many early car companies went broke. "But if you'd only got Ford it would have been sweet as." But they weren't first: Panhard et Levassor in 1889, Ford in 1901, sayeth Wikipedia.[1] So now you're waiting and picking winners in the sector, which is very much harder without a time machine to see what worked out in the end.

[1] https://en.wikipedia.org/wiki/History_of_the_automobile#Eras...


Of course this is survivor bias, and of course many companies went bust.

But that's what investing is. Even in a mature sector of the economy companies can go bust. That doesn't mean it's always a bad idea to invest in new and upcoming technologies.


>Whether you believe AI is going to be massively transformative to the modern economy on a scale with cars or personal computing is actually not sufficient to start investing in the 'sector'

Note that "not sufficient" does not mean something else and particularly not "never do it." You've still got to pick winners if you want to succeed and that is likely a difficult task if history is any guide. Good luck.


This is hindsight bias. You know the names of the ones history remembers.


You can always find some earlier example of a tech that lost. But if you invested in the first company to mass produce a car (Ford) you'd have done well. If you invested in the first of its kind smartphone (Apple) even better. Likewise there's been a lot of early failed AI startups in the past 20 years that you could point to, and now OpenAI is hitting it out of the park, expecting a billion in revenue next year on a pure AI consumer product. I think that shows this time is different.


But each revolution is a multiplier. A company making a billion a year is a dime a dozen today. If they really were onto AI, they'd be projected to make a trillion a year.


This is another way of saying "if you invested in the first one to be successful." Well, yeah. Obviously. Time machines work great for investment decisions.

edit:

Whether it was first or not, the Nokia N95 was a smartphone and was definitely before the iPhone. Forgotten like every earlier loser, I guess.


The point is that there is no way to tell whether a company will become one of the "early failed companies" vs "the first one to be successful". Your strategy is not actionable.


"Don't throw money at a new sector of transformative tech looking for first mover advantage because you'll lose."

Seems actionable to me, was it not clear?

You've got to do a lot more: you've got to pick that Ford will win, even if you're 100% sure the tech is massively transformative. The winners to pick may not exist yet.


Isn't a lot of the cash coming from funds? Funds that diversify but still can pump billions into this sector without breaking a sweat. Individual investors don't have a lot of opportunity to buy in anyway, and they usually know the risks. But funds can blindly pump memes alongside modest strategies and still return on investment overall; and they aren't left standing when the music stops.


I guess the time to invest in dot coms would have been around 2002. After the hype wave collapsed and you can see which businesses are hanging in and providing value.


We use the tech the blockchain sector was based on every day. The program is called git.


What I find interesting about this wave of tech is not just how useful, but how accessible it is. Image generators like SD pretty much work out of the box on a lot of consumer-grade hardware, LLMs might be a bit more of a stretch but still doable (haven't tried that yet, though). It's quite unusual, compare that to how inaccessible the first computers were.

Not sure I would love this fact if I was an AI investor, but for the rest of us, it's just a blessing. Let's hope it stays that way by supporting researchers/companies who do share their weights, and being mindful of the CEOs telling lawmakers that only they should be allowed to do matrix multiplications (not saying we don't need any regulation though). Those tools undeniably do create value, maybe not for every investor, but for countless users. And the investors should understand the risks, my guess is that if you'd invested in "cars" around 1900, chances that you'd have lost your money would have been quite high, even though your idea might have been right in principle.


Articles like these are written routinely.

I find them superfluous, and a symptom of the human tendency to never sit down and reflect on stuff.

Grifts prey on the ignorant, duping them through an informational gap.

As such, grifts flourish in new and uncertain environments. Environments where expert opinion is still forming and is badly communicated, and scammers can sound knowledgeable to the average person without actually knowing anything.

Now that experts have pushed against crypto and the whole field has become clearer in what can be done with it and what can't, it's obvious that grifters are moving to the new uncharted hotness: "AI". Writing articles like this, always talking about the specific events and never the global trend, does not help the wheel of grifting stop spinning.


Was this written by GPT? It feels very Trained On Hacker News.


Yes, this was in fact generated by a GPT model I specifically tuned for HN and that I managed to run inside my brain with minimal latency and power requirements.

[EDIT]

In conclusion


You should have added an "in conclusion,". Your meatGPT model needs more training.


You're right, I fixed the model


I was a complete believer in AI when ChatGPT first dropped. The tech seemed revolutionary. GPT-4 even helped me write a ton of code for an app.

But if you ask me now, I feel that the AI revolution is a little overstated. The tech, while incredibly good, is not really ready for large scale adoption. Individuals and hobbyists might benefit from it, but for large enterprises and serious applications, it's too inconsistent and unreliable.

All I can see it accomplishing is pushing out the lowest end of the content/code creation totem pole. That's nice, but it's not nearly the "intelligence revolution" the promoters have been promising.


Agree, and would add that I’m sensing the bigger value will be in analysis of huge, complex, noisy data sets. Most people don’t have those, so it’s not a widely accessible benefit.

Sorting through legal discovery documents is a good example. A team of smart, trained, observant JDs will do a pretty good job given a few weeks to plough through it all; a reasonably tuned AI should be able to produce similar value (even if not identical results) for a fraction of the cost, and do it overnight.


Summary: Investors don't know where to put their money and they're scared


At least I can see more concrete applications than I did with the Web 3.0 bullshit (NFT, metaverse and other buzzwords).


I feel like everyone was champing at the bit for x, y, z technologies to be the next big ground-floor thing. Then ChatGPT came along while the x, y, z techs were getting hit hard over the past year by bad luck, regulatory concerns, etc., and now that people lost their asses on those, they're afraid AI will be the same. Except I never really got value from NFTs, blockchain, etc.; most people I know don't even know what all that is (non-tech people, anyway).

I know plenty of barely computer-literate people who use ChatGPT and other generative AI tools daily.

The biggest problem I see, though, is OpenAI and Microsoft essentially competing with the people using their API to create products. If you build an app using OpenAI and they add every feature that sets your app apart, you might as well throw in the towel, and the investors who bought into your startup when you thought there was a moat will lose.


IMO AI is more underrated than overhyped. The scale of value that AI can bring may be larger than what the internet brought. But product design and engineering haven't caught up with the science yet. I think we are looking at AI too narrowly. LLMs are cool, but investors should look beyond that; ChatGPT wrappers aren't the next big thing. The fact that LLMs and image generation models work as well as they do now should give investors a signal that the science of AI is approaching a tipping point where it's finally good enough to be incorporated into products. I see potential in 10 years' time for a new FAANG: 5-trillion-dollar companies with heavy reliance on AI that bring automation to various aspects of our lives.


I agree. LLMs have a ton of unrealised applications in business. Imagine training one on your company wiki and chat history.

Barely any companies have done that yet because of legal and security concerns and because it isn't easy to do yet, but that will change.

It's not going to be long before someone makes an end-to-end speech-to-speech model: a single model that incorporates speech recognition, an LLM, and speech synthesis. In fact, I'm really surprised it hasn't happened already, because it's such an obvious thing to try. That's going to blow people's minds.
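The "chat with your company wiki" idea is usually built as retrieval-augmented generation: find the most relevant internal document first, then hand it to the model as context. Here's a toy sketch, assuming a bag-of-words retriever over made-up wiki snippets; a real system would use embedding vectors and an actual LLM API instead of the placeholder prompt at the end:

```python
import math
from collections import Counter

def tokenize(text):
    # crude word tokenizer: lowercase, strip surrounding punctuation
    return [w.lower().strip(".,:!?") for w in text.split()]

def cosine(a, b):
    # cosine similarity between two bag-of-words Counters
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, docs):
    # return the wiki page most similar to the query
    q = Counter(tokenize(query))
    return max(docs, key=lambda d: cosine(q, Counter(tokenize(d))))

# hypothetical internal wiki pages
wiki = [
    "Deploy process: push to main, CI builds the image, ops approves release.",
    "Expense policy: submit receipts within 30 days via the finance portal.",
    "Onboarding: new hires get laptop and accounts on day one.",
]

context = retrieve("Where do I submit travel receipts?", wiki)
# the retrieved page becomes grounding context for whatever LLM you call
prompt = f"Answer using only this context:\n{context}\n\nQuestion: Where do I submit travel receipts?"
print(prompt)
```

The legal and security concerns mentioned above are exactly about the `wiki` list here: the retriever decides which internal text gets shipped off to the model provider.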


Yes, that's probably true. The article draws a parallel between the current AI hype and crypto, but there's a huge difference: crypto didn't bring any benefit to anyone and didn't do anything that couldn't already be done before, with orders of magnitude better efficiency and security.

The current situation is more like the early dot-com boom of the late 1990s; Webvan, Pets.com, or AltaVista were ill-executed, but they weren't stupid ideas. It was then that Amazon and Google were founded.


I'm not a skeptic per se (I'm impressed by generative AI and LLMs), but I can't help noticing that the coding AI of these last months didn't lead to an explosion of software. AFAIK I haven't seen any non-trivial software that has been programmed either completely by AI or with the majority of the work done by AI.


Firmly in the skeptic camp, here to say people are love-bombing AI to make money, not because LLMs are a gateway to AGI; AGI doesn't have to appear for people to make money.

Belief in AGI is useful to sell stock and IPO. Few serious researchers in academia see AGI in what's going on, or even road signs pointing to it. Look at the language Hinton uses more closely.


He compared the latest AI advances to the invention of the wheel or the discovery of fire. That sounds pretty cut and dried to me. This isn't a nutjob; this is basically the father of AI saying even he's afraid of what might be wrought.


The skeptics will be wrong this time. LLMs will be the biggest tech revolution since smartphones. It's something that literally every single person can find a use for.


I don't think LLMs alone will be; LLMs are entry level. The real thing is embodied multimodal systems, even if the body is just virtual or a simulation: LLMs, Stable Diffusion, text-to-speech and vice versa, image recognition, tactile understanding, and other 'senses' we could imbue them with. LLMs definitely are amazing, but they're only a piece of the real revolution coming.


Regardless of whether it's the right idea right now, I am convinced that AI is at least not the complete vapourware that blockchain was. That really was some useless hype. There's something to show for it, and real applications, from self-driving cars to other smart systems. Classifiers are everywhere.

I interact with ChatGPT regularly. It's in my smartphone classifying my photos. I don't know when I have ever interacted with a blockchain.


Money is always pouring into some shiny (from a limited perspective) thing.

Probably too much money is with people who are clueless but greedy, and too lazy to find out?


The move from crypto to artificial intelligence has fueled the markets this year, but some are questioning how much of it is real.


At least ai seems significantly more useful than crypto.


I feel both things are not really ready for primetime, but i agree AI is way more useful.


Indeed. Crypto, when true to its philosophy, is about gaining financial and market freedom for businesses and people like you and me. Sadly, there's been a lot of co-opting, and from that, a lot of uneducated people who will call anything that empowers individuals "shady" and "criminal".

Alas, it would appear there are no such things as fundamental human rights, only laws, according to these people.


Crypto is 15 years old and still hasn't provided anything useful for normal people.

The current generation of AI tools is barely a year old and we have seen so much progress.


AI is undoubtedly useful, perhaps not revolutionary as heralded.

Crypto was never anything else than a grift. The only true feature of de-fi is to evade financial regulators, for a time, and enable large scale movements of shady money.


Nice contradiction.

> Crypto was never anything else than a grift.

Ah, alright then, I guess it's not useful for anything.

> evade financial regulators

Sounds useful to me!

> enable large scale movements of shady money.

Large scale movements of money? Super useful!


>> Crypto was never anything else than a grift.

>Ah, alright then, I guess it's not useful for anything.

No. It's a simple English sentence: it means it's not useful for anything else other than grifting. The grifters find it particularly useful, just like MLMs, traditional ponzi schemes, HYIPs etc. But that doesn't mean any of those grifts are "useful".

> Nice contradiction.

Only for the monkey jpeg brigade, who feel that the artificial demand from criminals laundering money and posing as legitimate crypto investors is a feature, not an unacceptable bug.

If the only ones using it are criminals, then it's no longer useful even for them, because the whole point is to claim legitimate crypto profits.


> it's not useful for anything else other than grifting.

I've paid for my domains and vps with crypto, I can pay for my search engine, send donations to non-profits I support, so there's your use case. I don't even have to hand over my personal banking information, so no worrying about data breaches, another use case!

I'm sure you would consider paying for a legal service to be a legitimate use case.

> monkey jpeg brigade

Whoa, name calling, well that always shows the strength of your argument.

> who feel that the artificial demand from criminals laundering money and posing as legitimate crypto investors is a feature

Didn't realize I was a NFT supporter by calling out your contradiction. Oh, and a straw man too, always nice to see. Also, technology can be abused? Who would have thought?

It's a shame that the primary users are the criminal masterminds who've apparently become art connoisseurs of monkey jpegs.

I don't seem to remember supporting "laundering money". Do you mean in the same way that banking at HSBC, BNY Mellon, Deutsche, Swedbank, Danske and others is supporting money laundering? [1-6] Or how banking with BoA is supporting fraud? [7]

Although it's nice to see that you believe in "legitimate crypto investors".

1: https://archive.nytimes.com/dealbook.nytimes.com/2014/06/30/...
2: https://www.bloomberg.com/news/articles/2019-02-20/swedbank-...
3: https://www.acfcs.org/news/419424/Danske-Bank-reveals-Estoni...
4: https://www.nytimes.com/2005/11/09/business/bank-settles-us-...
5: https://www.theguardian.com/business/2019/apr/17/deutsche-ba...
6: https://www.investopedia.com/stock-analysis/2013/investing-n...
7: https://www.justice.gov/opa/pr/bank-america-pay-1665-billion...

> If the only ones using it are criminals, then it's no longer useful even for them, because the whole point is to claim legitimate crypto profits.

Uh, sure? Sounds good man.


Is this any different to "blockchain is the solution to everything", "big data", "cloud" etc etc?


It's all a grift. The whole economy is grifters grifting grifters, a game of musical chairs that's going on ever since the stock market was first invented. Probably even before.


I tried to get a really tiny crack in my windshield repaired yesterday. Something coincidentally went wrong during repair, so I have an appointment to get the entire windshield replaced next week!


... Cool story bro?

Wrong comment?


I am not going to shed a tear for those VCs with dumb money who back the truck up and dump their cash into a "startup" that pivoted from blockchain to AI last week.


The "AI" bubble has some similarities but also important differences from the "crypto/blockchain" bubble and the brief "metaverse" mania.

The similarities are sort of obvious. The real economy is in a precarious state worldwide. Geopolitical strife, political polarization, exhausted and confused households, still reeling from the pandemic. All in a background of a deteriorating environment that either burns to ashes or is swamped by plastic. This is our real condition and there is no turnaround in sight.

Yet the "optimism" and valuations must be kept up or the system will collapse for good. The reliable pony delivering tricks is the tech sector. Being unregulated/oligopolistic with massive rent extraction, operating in an entirely virtual realm, and by now controlling all digital communication channels, it has massive resources and opportunities to pump up every nugget into a digital gold rush, and it does so shamelessly and with predictable regularity.

So what is different between these serial bubbles? Crypto and the metaverse require massive social and/or behavioral change. If you look at the problems blockchain was supposed to solve, all of them would be solvable with lower tech if people actually had an interest in solving them. There are far easier ways to make the monetary and financial systems more fair and honest than inventing a poor simulacrum. The metaverse requires a collective migration into a fake reality. People are increasingly escapist and absurdist, but strapping a heavy idiot-signaling device on your head is a virtual bridge too far for most.

"AI" is a better fit to the status quo. Grabbing any and all accessible data and algorithmically manipulating people is already enshrined as acceptable practice ("people so much enjoy the convenience"). So imho this bubble has some legs, which means the fall will be more painful when it happens. What will burst the bubble? Regulation on data collection and possible applications is one possible balloon prick. The other is commoditization.

Commoditization is an interesting one. If there is any silver lining in this dismal doom tech era we live through it is the fact that major information processing and communication capabilities are being built. It is conceivable that at some point these will be deployed in very different ways and with much bigger positive impact.


Well it’s more real than crypto in many ways so not a terrible gamble


Many bought crypto, few used it. Many use LLMs, few pay for them.


Modern "economy".

There are a bunch of thousands of guys with a couple to a couple hundred billion each. There are millions of people desperate to own a house and afford life. The rich guys want to be more rich. They're hiring some of the poor suckers to check how others got richer in the past. The answer is tech!

New tech is created every couple of years. People hype it up as much as they can. The rich guys give a fraction of their billions each to finance whatever seems remotely reasonable while squinting in that space, just for a chance to hit the jackpot and get more billions, maybe even a trillion, and their face onto Forbes and on TV.

The poor suckers gotta scramble. They invent all kinds of bullshit, and they sell it to the other poor suckers who advise the rich guys. Teams of specialists are created. Whole organizations. There's HR, somebody to organize team building events. Every layer spawns another layer. Lawyers, somebody to give sexual harassment trainings, someone to run the cafeteria.

Buildings are rented from the rich guys via management companies run by the poor suckers. Every day a handful of people make it and can even buy a house! Codes of conduct are written, company values and mission statements. People pivot, jump from place to place, try to sign the best contract. Every once in a while an exec jumps ship with several hundred mil in the bank.

It's a great life. What could be better?


This isn't exactly untrue, although the numbers are off, I think, but no one seems to have invented a better alternative. If you have money because you stole it from your citizens, or because you sent your serfs to die in a war, that's infinitely worse than having it because you did something someone else thought was valuable and they paid you for it.


Only very few people are asking for a completely different system, they usually demand fine-tuning the system so that it improves median life quality.

The philosophy used to be to have the least amount of intervention in the free market; now it's that there should be no intervention whatsoever. Isn't that an extreme position?


There is probably no country in the world with no intervention, or even close. In fact, there are massive interventions in pretty much every country.

Actual laissez-faire economies were mostly a thing in the 19th century.

Since then, regulations have been accumulating constantly, and total government spending as a share of GDP has increased from about 10% to 35-75% in most if not all western countries.


Yes, but many of these have one form or another of "take from the middle class and give to everyone else".

I wasn't arguing that governments don't do anything; it's that they don't do enough to limit the size of the parasitic rich class, while the original post was saying that this is the best system we've got.


> limit the size if the parasitic rich class

Do you consider it ok to dehumanize your fellow humans by comparing them to parasites?

Also, how can you be so sure that the metaphor even works? Do you truly believe that everyone else would be better off if the top 0.1% richest people in the world had never been born, and never started all those big companies they own?

Because that's what it means to be a "parasite". To consume resources you have no part in creating.

Or is the word "parasitic" ONLY used as a means to dehumanize a group you detest, without even being meant as a metaphor?

I don't mind having a discussion about how to best distribute the economic output of the economy, but calling one group or another parasites is not a good start.


> no one seems to invented a better alternative

There have been myriad alternatives experimented with in the 200,000+ years humans have been a thing. Many ancient societies didn't have the concept of property. Others did, but burned everything someone possessed when they died, ensuring a level playing field for each generation. Most ancient societies didn't have any hierarchy at all. The concept of anyone being the boss of anyone else is extremely recent.

We marvel at our modern world but what does it give us? Does it make us happy?


> The concept of anyone being the boss of anyone else is extremely recent.

My gut tells me it's as old as humankind. Citation?


> The freedom to abandon one’s community, knowing one will be welcomed in faraway lands; the freedom to shift back and forth between social structures, depending on the time of year; the freedom to disobey authorities without consequence – all appear to have been simply assumed among our distant ancestors, even if most people find them barely conceivable today. Humans may not have begun their history in a state of primordial innocence, but they do appear to have begun it with a self-conscious aversion to being told what to do. (...) the real puzzle is not when chiefs, or even kings and queens, first appeared, but rather when it was no longer possible simply to laugh them out of court.

The Dawn of Everything, 2021


Thanks for the citation, interesting, especially as this book is on my to-read-list.

On the other hand, eeh... nice words, but I don't buy this. Even animal groups have leaders, and leaving/abandoning the community probably meant death, not freedom. There is a reason why we think so much about what others think of us. One of the greatest survival skills is being able to fit into your community.


Is there any evidence of this? How do we know they felt self-conscious?


Even if you did invent a better alternative, do you think you would get a fair hearing, given how mainstream/corporate media is organised and operated?

Actually, do you think there is anyone with any substance and influence out there trying to even invent a better alternative? All I see is culture wars/left vs right nonsense, but maybe I am out of touch.


Sure, if this is the best possible of all worlds then we should be happy, but is it really? Seems defeatist to think so. Are there economies which function in a healthier way than this? Do you remember the early days of computing, before the big money got tipped off?


You need to be really blinded by cynicism not to see how much better modern life is for average people, or how many people benefit from advancements and scale that wouldn't be possible without billions of dollars worth of investment.


Is this accidental to the billionaires, or is this their intended outcome?

And could the investment come from alternative sources if the billionaires didn't exist?


>And could the investment come from alternative sources if the billionaires didn't exist?

In theory, sure. In practice, pretending that the current system has been making things worse or that we have a viable realistic and better alternative is flat-out wrong.


Is it OK to be content with how much better things are now than before we had modern dentistry and combustion engines, without questioning how entire macroeconomic phenomena contribute to our modern wellbeing? We improved, can we improve further?


> What could be better?

Reality. Most of tech is not funded by existing billionaires.


This. If life were as the OP (beautifully) put it, the world would be much better. At least the rich guys would try to innovate.

Most billionaires get rich in ways that are worthless, when they're not damaging to society. We allow governments to send people to die in wars just so their friends producing weapons will make a profit.

Most of the money around gets stolen from the little guys and given to the big guys. Think about your taxes funding banks' bailouts or inflation.

Tech is not perfect; every large organisation eventually resembles the politics and stupid games of power you see in every government or other criminal organisation. The waste, the inefficiency, the idiotic rules.

Stay in small companies, create value for real people, maybe even start one (with no VC) if you want your chance at making some cash without going insane.


So let's say you want to start a company that makes some stupid AI cookbook and you want money to payroll people. How do you do that without investor cash?


Where did you get the impression that investor cash is tied to billionaires?


How do you define "poor"? Net worth of less than $10M? Income less than $1M?

Unless you set very strict limits for "poor" like that, the people the ultra rich hire tend to be rather well off, or at least comfortable, themselves (by that I mean a net worth of >$1M OR an income of >$100k).

Actual poor people don't build state-of-the-art tech. At best, they work as cleaning staff or in the cafeteria of those companies, or maybe in an assembly plant in a foreign country. (And even those may feel wealthy compared to their friends and family.)

Those who resent the ultra rich the most tend to be those who are themselves quite comfortable, often affluent even, but really hate it when other people are even more successful than themselves.

They often pretend to care for "the poor", but really all they want is to pull down anyone more successful than themselves.


It's common to project resentment towards individuals onto someone who expresses dissatisfaction with systemic issues. We want to think on our small human scale, but our societies got so big that we can't make sense of them in those terms anymore.


> It's common to project resentment towards individuals onto someone who expresses dissatisfaction with systemic issues.

Perhaps, but it's also quite common for people who are really doing quite well to pretend to argue on behalf of actually marginalized groups when struggling for power with their opponents.

The way to tell the difference between those who genuinely want to help marginalized groups and those who just use them as pawns in a power struggle is whether they spend a similar amount of effort helping those groups with problems that do NOT in any way involve taking power away from their political opponents.


There's no struggle for power between an overworked corporate bee, and a billionaire. The latter is hundreds of thousands of times more powerful than the former.

You're right though, a lot of the time folks with office jobs don't really care about the marginalized (and who could blame them with their entire energy drained by their jobs). Your heuristic to tell the difference is... dubious.


> There's no struggle for power between an overworked corporate bee, and a billionaire.

Oh, but there is! Why do you think the "bees" try to create unions, and the billionaires try to prevent it? If the bees get their union, that means they can wrestle some amount of power away from the owner.

> and who could blame them with their entire energy drained by their jobs

For some people, at some companies, and in some countries, it's normal for the company to be able to drain most of the energy from the workers. In other places, there is much more work/life balance.

My heuristic, like the comment that I responded to directly above, was not limited to employer/employee relationships, but covered so-called "systemic issues" in general.

It also happens in both directions. For instance, when conservatives argue against a minimum wage, they tend to argue that a minimum wage can lead to increased unemployment. Do you think most conservatives REALLY care that much about that part, or do they just want to avoid minimum wage regulations?


>income of >$100k

This is barely "middle class" now, unless you live in the middle of nowhere or live like a college student.

$100k/yr stopped being impressive long ago. Even the FAANG rich boys are closer to the janitor than to the CEO.


$100k/yr probably still puts you in the top 1% by personal salary per year, globally. And about 3x above the median in the US.

I'm not arguing that it makes you rich, but you're definitely NOT "poor" if you have such an income.

Anyway, comparing to the CEO on a linear scale is kind of absurd. Anyone making less than half of what the CEO does is closer to the janitor than the CEO. Still, if you make exactly half as much as the CEO of these companies, you're still objectively rich.

And why do you think you should be closer to the CEO than the janitor in the first place? Do you think your job is that much more important than making sure the power is on and the toilets are not stuck?

If someone is genuine about protecting the "poor", they should first ensure that the janitors and cafeteria workers are not poor, before demanding raises that would take them even further away from those who really ARE poor.

It appears to be quite typical for those claiming to be poor (while having an above-median income and wealth) to have just the attitude towards those below them that they blame the "rich" for.


>I'm not arguing that it makes you rich, but you're definitely NOT "poor" if you have such an income.

It depends on where you live. In the Bay Area, you can earn $100k/yr and qualify for food assistance. Housing is the biggest expense, and it's only getting worse.

https://www.kron4.com/news/bay-area/100k-a-year-is-low-incom...


"The stock is the product." Jack Barker


You started a hell of a flamewar with this. Could you please not do that on HN, regardless of how bad things are or you feel they are? We're trying for something different here, such as not burning to a crisp.

https://news.ycombinator.com/newsguidelines.html


Things are always changing.

The US and its mindless dynamics definitely could use a kick in the teeth to speed up the change. But it's happening, with interest rates rising, dedollarization, etc. Some rich folk think nothing is changing and their behavior doesn't need to change; their story won't end well.

They don't have as much control over anything as they think. In fact, I think rich people are going to take the biggest hit mentally, financially, and socially, because it's very precarious having wealth without control over what happens tomorrow morning.


It's a bit weird to label millions of people, including presumably yourself, as 'poor suckers', is this a joke?


It's an oxymoron that highlights the elite's dismissive attitude towards the working class. You might stumble upon oxymorons in all kinds of non-technical literature in various languages.


Can you explain how that is an 'oxymoron'? Because I'm having trouble squaring the intended meaning with the dictionary definition.


Poor is an expression of compassion, and sucker is dismissive. They're contradictory.


'Poor' is not usually considered an expression of 'compassion' in most contexts, certainly not on HN.

Are you learning English? If so, it's best to follow the dictionary meaning and examples.

For example, when HN users write 'poor engineer', 'poor dev', 'poor software', 'poor working environment', 'poor commute', etc... they most likely don't intend to attach any connotation of compassion.

I think most readers will interpret 'poor suckers' literally as in 'suckers' who are lacking wealth/income/means/etc...


> It's a bit weird to label millions of people, including presumably yourself, as 'poor suckers', is this a joke?

Your average millionaire is closer in wealth to the average homeless person than he is to the average billionaire.

A single catastrophic event - be it some kid getting paralyzed on your backyard trampoline, an investment gone wrong, a company going under because it can't compete with Chinese government-backed price dumping, disaster that rips apart one's home that has gotten uninsurable, cancer, or early-onset dementia - can wipe out their wealth in the blink of an eye, whereas the billionaire can afford to sink half a billion into a yacht and still have not made a dent in his wealth.


Money is not wealth. Once you have more of it than is necessary for basic needs, you have to figure out a way to get rid of it and exchange it for real wealth. Investing it in "tech" is an easy and dumb solution to the problem.


Also, wealth is not money... Owning millions in illiquid assets is not money.


The problem is that the 0.1% don't seem to really understand this. If they did, we would see massive investments in basic science and medicine from them. But we don't. Instead, most of them seem to treat their wealth as internet points to compare with other super rich.


They want investments with short term returns… and basic science isn't that.


But that's exactly because they're being stupid. Only basic science or medicine has any chance of making any significant impact on their lives. And most of the potential basic science impact will do so via medicine. Being a billionaire won't protect you from cancer and multi resistant bacteria.

They are missing an opportunity to increase their life span and life quality and do the same for their children. Only very few billionaires understand this, Bill Gates being maybe the most obvious example.


Bill Gates is a lot of talk, but I suspect he's using his funds to push Microsoft onto poor countries that would otherwise default to Linux.


You are very mistaken.


This reads like Sin City. Please do more Noir style writing.


This isn’t noir writing, I found it an accurate if a bit cynical description of reality


Nice summary.


[flagged]


> What could be better is to be happy with what you have.

This statement requires further qualification; for example, you certainly wouldn’t say that to someone in an abusive relationship - or someone living in poverty.


It does not. I obviously did not say or imply that everybody can be happy with what they have.

The existence of people who have more than you doesn't have to be a reason you're unhappy, though.


> I obviously did not say or imply that everybody can be happy with what they have

Versus:

> What could be better is to be happy with what you have.

Contradiction detected.


I'd love to help, but I'm not sure which words you're struggling with. "Could", perhaps? Or was it the hypothetical nature of the question and answer that bypassed you? I wasn't talking about a specific person's complete life situation, but responding to someone moaning about how terrible life is because of "the rich guys".


The rich man is not he who has a lot, but he who needs little...


[flagged]


[flagged]


[flagged]


[dead]


[flagged]


You know it's time to take a break from the internet when you get so upset that you start with the keyboard tough guy shtick. Hope you feel better soon.


If you don't have the courage to voice your opinions in person, you're just a coward that knows very well has socially unacceptable opinions :)


That people don't have to be miserable with greed and envy of people who have more than they do? It's a very common and socially acceptable opinion actually.


Ah so you're a troll, because that's NOT what you previously wrote.


You mean that's not what you previously misrepresented my position to be.


[flagged]


[flagged]


[flagged]


[flagged]


[flagged]


[flagged]


The way you carried on in this flamewar was beyond the pale. As we've warned you multiple times before and you've continued to break the site guidelines badly, I've banned the account. Seriously not cool.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


[flagged]


The way you carried on in this flamewar was also beyond the pale, we've also warned you more than once before, and you've also continued to break the site guidelines badly. I've therefore banned this account as well. Seriously not cool.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


So, we have a few variants to choose from:

a) everybody is poor;

b) some are poor, some are in the middle, some are rich.

Variant (b) is bad because some are poor.


Actually, compared to the economies of past societies, the modern economy in the West is amazing. If you work hard, have a little above-average intelligence, and perhaps a bit of luck, you can make a lot of money. If not, I guess you can make excuses.

The simplest way to organize a group of people, animals, organisms is to make one of them leader and have the rest follow him. This is the way our ancestors behaved for hundreds of millions of years and it's hardwired into our brains.


The fact that you think our ancestors go back hundreds of millions of years…is wild.


How far do you think our ancestors go?


Homo sapiens are about 300k, homo erectus about 2 million and if we are going back to chimpanzees and gorillas, 10 million max.


I'm sure you're completely unaware of the fact that your world view comes from protestant religion, which equates richness with god's favour, thus ending up considering poor people as worse sinners who don't deserve a better life due to their moral failings.

People of other religions and cultures don't necessarily share your same faith (I realise you think you're being completely rational, but to an external observer you are not).


I'm an atheist, but this is what I learned in school (by Protestant Christian Socialist teachers):

  Matthew 19:23-26 American Standard Version (ASV)
  And Jesus said unto his disciples, Verily I say unto you, It is hard 
  for a rich man to enter into the kingdom of heaven. And again I say 
  unto you, It is easier for a camel to go through a needle's eye, than 
  for a rich man to enter into the kingdom of God.


In my experience that bit is more forgotten in protestant countries rather than catholic ones. As in, in catholic ones rich people aren't seen as model of virtue and everybody knows that "hard work" won't get you there.


> In my experience that bit is more forgotten in protestant countries rather than catholic ones.

I think you're misunderstanding the "hard work" part. Unlike (perhaps) in the US, the work ethic part of northern/Germanic countries is not about working super hard to get rich, but rather to do the job with the integrity and effort that is reasonable, given your health and abilities.

The typical reward is not to get super rich quickly. But if you uphold the ideal, you deserve respect, even if you're cleaning staff or the janitor.

Similarly, NOT living up to the ethical standards will be damning regardless of social status. Cheating and corruption comes with harsh social punishment, especially for those near the top.

Remember that we're talking about the part of the world that practically invented both Social Democracy and the Nordic Model.

Now, I also have family and contacts in various places in Southern Europe and South East Asia, some of whom are very wealthy. What these ALL have in common is that various types of corruption and plutocracy are just facts of life that are taken for granted, and nobody cares.

Now, my own hypothesis is that religion plays only a small role in this difference, and that it is mostly a consequence of how difficult it was to survive as a farmer in Northern Europe during the medieval era, especially in Scandinavia.

People used to live on isolated farms or in small villages, where the farms would barely produce enough food to sustain the family of the farmer. Those who did not work at least moderately hard usually would not survive, and there was little left over to give to either the poor or the nobility.

This led to an egalitarian outlook, where people who wanted to live off the produce of others were not tolerated easily, regardless of whether they were beggars, thieves or barons. And since production was low, there was little to tax or steal, anyway.

In warmer climates, farm yields were much greater, which caused success to be much more about social relationships than hard work at the farm.


AI investment is actually down recently; it looks like the hype is wearing off, since most of the companies funded were just wrapping OpenAI APIs. I will copy-paste a post I submitted before regarding a similar issue.

https://twitter.com/0xSamHogan/status/1680725207898816512

Nitter: https://nitter.net/0xSamHogan/status/1680725207898816512#m

---

6 months ago it looked like AI / LLMs were going to bring a much needed revival to the venture startup ecosystem after a few tough years.

With companies like Jasper starting to slow down, it’s looking like this may not be the case.

Right now there are 2 clear winners, a handful of losers, and a small group of moonshots that seem promising.

Let’s start with the losers.

Companies like Jasper and the VCs that back them are the biggest losers right now. Jasper raised >$100M at a 10-figure valuation for what is essentially a generic, thin wrapper around OpenAI. Their UX and brand are good, but not great, and competition from companies building differentiated products specifically for high-value niches is making it very hard to grow with such a generic product. I'm not sure how this pans out, but VCs will likely lose their money.

The other category of losers are the VC-backed teams building at the application layer that raised $250K-25M in Dec - March on the back of the chatbot craze with the expectation that they would be able to sell to later-stage and enterprise companies. These startups typically have products that are more focused than something very generic like Jasper, but still don't have a real technology moat; the products are easy to copy.

Executives at enterprise companies are excited about AI, and have been vocal about this from the beginning. This led a lot of founders and VC's to believe these companies would make good first customers. What the startups building for these companies failed to realize is just how aligned and savvy executives and the engineers they manage would be at quickly getting AI into production using open-source tools. An engineering leader would rather spin up their own @LangChainAI and @trychroma infrastructure for free and build tech themselves than buy something from a new, unproven startup (and maybe pick up a promotion along the way).

In short, large companies are opting to write their own AI success stories rather than being a part of the growth metrics a new AI startup needs to raise their next round.

(This is part of an ongoing shift in the way technology is adopted; I'll discuss this in a post next week.)

This brings us to our first group of winners — established companies and market incumbents. Most of them had little trouble adding AI into their products or hacking together some sort of "chat-your-docs" application internally for employee use. This came as a surprise to me. Most of these companies seemed to be asleep at the wheel for years. They somehow woke up and have been able to successfully navigate the LLM craze with ample dexterity.
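The "chat-your-docs" pattern mentioned above is simple enough that an internal team can sketch it without any framework: score document chunks for relevance to the question, then stuff the best ones into the LLM prompt. This is a toy illustration of that flow, with word-overlap cosine similarity standing in for the real embedding models a production system would use; the function names are my own.

```python
from collections import Counter
import math

def score(question: str, chunk: str) -> float:
    """Cosine similarity over word counts -- a crude stand-in for embeddings."""
    q, c = Counter(question.lower().split()), Counter(chunk.lower().split())
    dot = sum(q[w] * c[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in c.values())))
    return dot / norm if norm else 0.0

def build_prompt(question: str, chunks: list[str], k: int = 2) -> str:
    """Retrieve the k most relevant chunks and stuff them into an LLM prompt."""
    best = sorted(chunks, key=lambda ch: score(question, ch), reverse=True)[:k]
    context = "\n---\n".join(best)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In a real deployment the scoring step would be a vector store (the tweet mentions LangChain and Chroma), but the control flow is exactly this: retrieve, concatenate, ask.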

There are two causes for this:

1. Getting AI right is a life or death proposition for many of these companies and their executives; failure here would mean a slow death over the next several years. They can't risk putting their future in the hands of a new startup that could fail and would rather lead projects internally to make absolutely sure things go as intended.

2. There is a certain amount of kick-ass wafting through the halls of the C-suite right now. Ambitious projects are being green-lit and supported in ways they weren't a few years ago. I think we owe this in part to @elonmusk reminding us of what is possible when a small group of smart people are highly motivated to get things done. Reduce red tape, increase personal responsibility, and watch the magic happen.

Our second group of winners live on the opposite side of this spectrum: indie devs and solopreneurs. These small, often one-man outfits do not raise outside capital or build big teams. Their advantage is their small size and ability to move very quickly with low overhead. They build niche products for niche markets, which they often dominate. The goal is to build a SaaS product (or several) that generates ~$10k/mo in relatively passive income. This is sometimes called "micro-SaaS."

These are the @levelsio's and @dannypostmaa's of the world. They are part software devs, part content marketers, and full-time modern internet businessmen. They answer to no one except the markets and their own intuition.

This is the biggest group of winners right now. Unconstrained by the need for a $1B+ exit or the goal of $100MM ARR, they build and launch products in rapid-fire fashion, iterating until PMF and cashflow, and moving on to the next. They ruthlessly shut down products that are not performing.

LLMs and text-to-image models a la Stable Diffusion have been a boon for these entrepreneurs, and I personally know of dozens of successful (keeping in mind their definition of successful) apps that were started less than 6 months ago. The lifestyle and freedom these endeavors afford to those that perform well is also quite enticing.

I think we will continue to see the number of successful micro-saas AI apps grow in the next 12 months. This could possibly become one of the biggest cohorts creating real value with this technology.

The last group I want to talk about are the AI Moonshots — companies that are fundamentally re-imagining an entire industry from the ground up. Generally, these companies are VC-backed and building products that have the potential to redefine how a small group of highly-skilled humans interact with and are assisted by technology. It's too early to tell if they'll be successful or not; early prototypes have been compelling. This is certainly the most exciting segment to watch.

A few companies I would put in this group are:

1. https://cursor.so - an AI-first code editor that could very well change how software is written.

2. https://harvey.ai - AI for legal practices

3. https://runwayml.com - an AI-powered video editor

This is an incomplete list, but overall I think the Moonshot category needs to grow massively if we're going to see the AI-powered future we've all been hoping for.

If you're a founder in the $250K-25M raised category and are having a hard time finding PMF for your chatbot or LLMOps company, it may be time to consider pivoting to something more ambitious.

Let's recap:

1. VC-backed companies are having a hard time. The more money a company raised, the more pain they're feeling.

2. Incumbents and market leaders are quickly becoming adept at deploying cutting-edge AI using internal teams and open-source, off-the-shelf technology, cutting out what seemed to be good opportunities for VC-backed startups.

3. Indie devs are building small, cash-flowing businesses by quickly shipping niche AI-powered products in niche markets.

4. A small number of promising Moonshot companies with unproven technology hold the most potential for VC-sized returns.

It's still early. This landscape will continue to change as new foundational models are released and toolchains improve. I'm sure you can find counter examples to everything I've written about here. Put them in the comments for others to see.

And just to be upfront about this, I fall squarely into the "raised $250K-25M without PMF" category.


I'd add that actually using LLMs to add surprisingly powerful or complex features is extremely easy as a dev. It's turned things that would have needed a large ML investment and expertise into a few REST API calls. The other vital thing, imo, is the pay-per-token pricing with no minimum, plus a simple UI for prototyping: you can build out a demo paying personally and then get a company account.
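To make that concrete, here is a minimal sketch of what "a few REST API calls" looks like in practice: a sentiment-classification feature that would once have required labelled data and a trained model, reduced to one prompt and one HTTP request. The endpoint and payload shape follow OpenAI's public chat-completions API, but the model name and prompt are illustrative choices, not a prescription.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(document: str) -> dict:
    """Construct a chat-completions request that classifies sentiment.

    Pre-LLM, this feature meant collecting labelled data and training a
    model; now it is a prompt plus one HTTP call.
    """
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "Reply with exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": document},
        ],
        "temperature": 0,  # deterministic-ish output for a classification task
    }

def classify(document: str, api_key: str) -> str:
    """Send the request and pull the one-word label out of the response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(document)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"].strip().lower()
```

Because pricing is per token with no minimum, a sketch like this can be prototyped against a personal key before any company account exists.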


> Right now there are 2 clear winners, a handful of losers, and a small group of moonshots that seem promising.

There are going to be many more losers than winners in this AI race to zero than people realize.


I have no idea what your comment about Musk is referencing. I can't think of a less inspiring figure.


This is not my post, just a tweet I thought was interesting. Don't focus on Musk or whoever else, the main point they're making about how AI success is bimodal is, I think, quite insightful.


You can't think of a less inspiring figure than the man who made EVs mainstream and built the most successful space company?


The man who takes credit for those things? Things which are the work of others? No, I do not find that inspiring.


Would SpaceX and Tesla exist without Musk?


Yes? Most likely as better than what they are now. Musk is a net negative.


Yeah the two are not comparable.


Right. We probably will actually get something substantial out of investment in AI even if it falls far short of the lofty promises.


Indeed. There was never anything in crypto.


Hindsight is awesome.


With hindsight, I was correct about Bitcoin being worse than useless when I decided not to bother using my gaming PC and free electricity to mine it in 2011. With hindsight, this could also be seen as a mistake. Although I'm pretty sure I would have sold it at $25 and considered that an unreasonable gain.


Given the absurd pace of open source, I'm not sure the money is in "AI" itself; maybe in certain applications. But if I can run something 95% as good as GPT-4 on my home desktop in a year or so, then I'm not going to be paying for any "AI" solutions.


In a year it may not be legal to run that GPT-4 on your desktop. Every nation sees the writing on the wall.


How do you suppose this is enforced now? Lol


There's no enforcement right now because there's no law for it yet. Laws can change, nearly every legislature on the globe is working on this.


It is a grift regardless of usefulness. "But it is useful" is hardly a justification for destroying the planet [0] [1] [2] when there are no viable, efficient methods available today for training, fine-tuning, and running inference across tons of data centers.

All for so-called companies claiming to be 'AI companies' when they cannot even read or implement a technical paper and are just wrapping someone else's API, and immediately they are 'AI companies'. When it goes down, they start crying about it 'not working'.

That is a confidence trick, which is the definition of a grift, and most of those replying here with excuses of "But it is useful" are likely underwater on their investments in inflated ChatGPT-wrapper companies.

[0] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...

[1] https://www.independent.co.uk/tech/chatgpt-data-centre-water...

[2] https://www.theguardian.com/technology/2023/jun/08/artificia...



