What Neeva's quiet exit tells us about the future of AI startups (supervised.news)
121 points by bobvanluijt on May 26, 2023 | 89 comments



I believe the "Google Memo" answered this entire line of questioning very well:

https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...

"We Have No Moat, And Neither Does *"

The problem with building AI products is that, as long as you don't know why or how your product works, your competition can simply imitate the surface-visible results and get something just as good, because they have no more clue about why or how it works than you do.


We've got to stop calling this a "Google Memo." That's a false narrative. It's just a random doc written by one of 140000+ employees.

> Google has been contacted for comment but it is understood that the document is not an official company memo. [1]

There are moats to products, but less so to pure language models trained on the same web-scale scraped data that many share.

Not all data is readily available to language models, and integration can be difficult.

A company that specializes in say AI for trash sorting likely still has a moat.

Microsoft integrating AI into Windows still has a moat (for Windows).

GPT-4 is ~200 Elo better than the next best semi-public Vicuna-13B in Chatbot Arena [2]. That is a non-zero moat - perhaps due to hosting larger models, training data, licensing, output postprocessing, etc.
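
For a sense of what a ~200-point Elo gap means in head-to-head comparisons, here's a quick back-of-the-envelope using the standard Elo expected-score formula (the helper function below is just for illustration):

    # Standard Elo expected score: P(win) = 1 / (1 + 10^(-diff/400)).
    # A 200-point gap implies the stronger model is preferred ~76% of the time.
    def expected_win_rate(elo_diff: float) -> float:
        return 1.0 / (1.0 + 10 ** (-elo_diff / 400.0))

    print(round(expected_win_rate(200), 2))  # 0.76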

[1] https://www.theguardian.com/technology/2023/may/05/google-en...

[2] https://lmsys.org/blog/2023-05-25-leaderboard/


> GPT-4 is ~200 Elo better than the next best semi-public Vicuna-13B in Chatbot Arena [2]. That is a non-zero moat

It's a non-zero advantage.

A moat is something that inhibits someone from closing an advantage.

(Also, it's odd that the biggest models they are testing, outside of the big vendors' centralized ones, are 13B-14B when 30B-ish and 65B-ish versions exist.)


Perhaps it's not a moat.

However, if the advantage is due to things like inference infrastructure to support a massive model, that isn't easy to duplicate.

I would also say that the quality of these smaller models is good, but we also may not be measuring them correctly. Recent papers suggest that these smaller LMs don't fully capture ChatGPT quality in ways that may not have appeared with crowd-worker ratings [1]. It's easy to have your inputs fall inside a happy distribution for a paper but fail in the real world in ways that GPT-4 doesn't.

LMSYS would love to compare with bigger models but they have limited resources. Contributions are welcome [2].

[1] https://arxiv.org/abs/2305.15717

[2] https://lmsys.org/blog/2023-05-25-leaderboard/#next-steps


OpenAI doesn’t make their own hardware, even in the fabless sense of “make.” So in what way is inference infrastructure a moat?


The moat (or the water/crocodiles in it) is the content that the company gathers in relation to the offering that is being defended. Microsoft has Github, which is a source of code that the model can operate upon, as well as the interactions/queries with the users. OpenAI is playing around with sharing content because of this. They want to build a moat and will use us to do it.

If someone just has an approach to solving the problem, i.e. code that does this that and the other, then there is no moat.


> We've got to stop calling this a "Google Memo." That's a false narrative. It's just a random doc written by one of 140000+ employees.

So it's a.. memorandum?


yes but "{person name} memo" is not the same as "Google Memo" (published and endorsed by the company)


> (published and endorsed by the company)

this is not implied by the name "Google memo".


It is a memo written by a Googler on internal systems; that makes it a Google memo. Companies almost never let internal memos escape officially without PR and legal having a crack at the content.

What I'm really curious about is why you think this isn't a Google Memo, and why you think that's a false narrative.


There is a huge difference between a leadership-endorsed strategy memo like the Nokia/Elop "Burning Platform" memo (1), and a memo by a random engineer like Steve Yegge's Platform rant (2).

Random engineer memos can certainly still be influential, but they do not dictate company direction.

(1) https://www.engadget.com/2011-02-08-nokia-ceo-stephen-elop-r...

(2) https://gist.github.com/chitchcock/1281611


To me, "memo" implies it being for the business. If this was written by leadership or intended to be some sort of instruction to others, then I think it would be more reasonable to call it that.

But it wasn't. It was an opinion written down by an individual, for no other reason than to share their personal opinion. It was closer to an HN comment, not a document necessary as part of business operations.


Anywhere I've worked, junior engineers have put together ridiculous memorandums, especially when they're on the way out. That does not mean that the opinions of the author align with the company's.


> A company that specializes in say AI for trash sorting likely still has a moat

Is the moat the trash data?


(250,000 employees; just about half of the staff are TVCs.)


for others, TVC apparently means "Temporary, Vendor, or Contractor", aka non-FTE


for others, FTE apparently means "Full-Time Employee", aka non-TVC


was just gonna ask, thanks


Not random. A 'researcher at Google'.


Indeed. I have termed this the coming "AI clone wars".

"Anything you create can be created without investment cost while also being unique in design as well as delivering the same function or experience"

From - https://dakara.substack.com/p/ai-and-the-end-to-all-things


The droids are clones, delicious


I believe there is a better opportunity in focusing hard on specific niche industry verticals and developing AI-assisted workflows that make their operations more efficient.

Those will be good businesses, and founders will do well, though they may not be billion dollar businesses. Their moat will be industry comprehension and then their integration and embedding into business operations.

That does leave a question as to the purpose of VC in AI though.


That Googler has no real-world experience whatsoever.

Google owns several platforms used by billions. If that is not the moat, what is?

Google has moats. They just suck at executing. That is why they are not dying and are still making more and more money, even while looking like they are losing. That is the benefit of having moats.


That memo was specifically about Bard, Google's AI chatbot, not about Google in general. Google has plenty of other moats (they tend to build products with synergistic effects), and not having a moat there was really freaky to Googlers.

At some point, AI people are going to learn that most AI capabilities are usually features, not products.


Still, their moats are helpful in distributing their AI.


The fact that anyone at Google wrote this memo tells you more about Google than it does about AI (or anything).

What’s the moat for Apple (after all, they’re all phones and computers)?

What’s the moat for Microsoft (after all Google and insert thousands of other competitors offer docs, sheets, etc)?

What’s the moat for Facebook (after all, it’s just a social network)?

What’s the moat for Google search (it’s just another search box)?

The moat isn’t the product, it’s the business. Every one of these companies built a business that served customers well (people who want the best phone, company IT departments, advertisers, and advertisers again). The moat is a business so good at serving its customers that you can afford to offer a better/cheaper/faster product than others because you serve a well paying customer the best. Done well, your moat is then the momentum of being the default option.


The moat is real. And it definitely is the product.

Long time back when Google first came about.. I remember reflexively switching from the default Yahoo browser to Google. It felt painful even trying out Yahoo Search Engine... I was just a kid back then and even I wondered why I was doing this.. Google's Page Rank algorithm was simply superior..

Paypal's moat was being able to handle cyber criminals/hacking.. their competitors were getting wiped out because they were losing millions of dollars due to attacks and they did not have the technical know how..

pg's viaweb's moat was being able to rapidly add features because pg grokked lisp pretty hard at that time..

Google search results were far superior.

Same thing with Apple.. I never had a mac/iphone crash on me ever.. they just worked..

For Facebook.. the killer feature was the wall.. that and the network effects..


Yep. You have to have a product to get in the game and win customers, and these are all examples of a killer product. And then you build a business around them, serving a customer, that - even when your features are cloned - creates a sustainable moat.

The wall was cloned, but the wall (ultimately more so Newsfeed) enabled you to become one of the biggest sites on the internet - which turned out to be really valuable to advertisers. Which, in turn, allowed you to invest in features, buy potential competitors, and invest in AI research teams that open-source models that make you competitive in AI. The sum of those things makes you a $671B company throwing off enormous sums of cash. That turns out to be a pretty sustainable moat for a while - long after your product becomes table stakes.

The problem with thinking that this is impossible in AI is that it ignores all of the other times it's been done. Facebook was JUST a social network that would "be a fad" and "never take off". It was "just another website" that "anyone could build". The difference is they built it. And then they built on it.

Same thing will happen here. And now the question is: will it be one of the current incumbents that cracks it, or does a newcomer have a shot?


I hadn't heard of "viaweb" until now. Having just read the (tiny) wikipedia entry for it, I'm getting intense radio-on-internet vibes from it.

> Viaweb's example has been influential in Silicon Valley's entrepreneurial culture, largely due to Graham's widely read essays[11] and his subsequent career as a successful venture capitalist.[12]

Basically - rich guy tricked some other rich people early on in their career - attributes future success to this early "success" despite it probably being a bad idea and having less to do with innovation and more to do with big-dicking clueless investors.


Many things from those days were like that, but there was no infrastructure for anything in the 90s, and Viaweb added e-commerce. You had to write a bucketload of Perl cgi-bin horror to do this yourself.

PG should edit that Wikipedia entry and fix that last part to read less like what you imply. However, if you check the timeline, you can come to the same conclusion.


For me it was AltaVista to Google!


It’s monopoly and anti-competitive business practices all the way down. That’s Google’s moat and has been for a decade.

Under any sane regulatory regime they never would have been allowed to get to this point but that didn’t happen.

The only problem here is they aren’t going to be able to replicate that for the next big tech shift.

The moat isn't "the product": the only great product they had was the search engine, and that doesn't make any money on its own. The moat came when they were allowed to use it to dominate markets in advertising, mobile, email platforms, and so on.



100 million users for ChatGPT plus GPT4 with plugins seems like a pretty good starting moat.


Is it though? It would be a moat for social media or something with network externalities, but chatbots don't have the strong network externalities that come when the business involves many-to-many user-user interaction patterns, or even 1-1 interaction.

It's a lead, but it's not a barrier to competitors acquiring users if they can offer better value for the price, absent network-size effects; nor is it a barrier to competitors' development.


They still accumulate a large amount of data for training their future models that their competitors can't access.


That's not a moat. That's just the current position they achieved by being the first mover. But nothing is stopping someone else from doing a cheaper/better model tomorrow, next year, whenever. I'd jump ship within a few hours to a day from their Premium if something better or same-but-cheaper came out.


That’s not a moat, that’s a head start. Unless there is a network effect or good feedback from the users to improve the model, it’s not a moat.


If someone offered the same thing with better privacy for business use, I'd use it.


That's OK. You're mostly only in it long enough to make your quick buck and exit.


I don't know how this memo keeps coming up as an artefact. It was not written by an actual executive at Google, who would be far more aware of the moats and the business at Google.

Anyone who is more than just a casual observer knows how far from the truth it is.


Maybe I haven't drunk enough coffee today, but what exactly is the takeaway here?

Unless I’m missing something this article is basically saying a failed startup that couldn’t find product market fit was acquired by a large company for its team and some of its tech.

Maybe I skimmed the article too quickly, but this exact movie is one that has been playing for decades.

Edit: Side note, my personal opinion on AI is that companies with existing distribution and existing audiences will be the ones that succeed (e.g. Notion layering AI on top of its widely used existing wiki platform). Succeeding by building pure tech with no pre-established audience will be very hard.


> Succeeding by building pure tech with no pre-established audience will be very hard

But that's how "Open"AI and Hugging Face started, didn't they? So it's possible. Not easy, and probably not single-handedly. I would say it's much harder than selling another game in the app store.


Hugging Face is a niche business at best, and OpenAI is pure hype, no real substance. I know you'll find someone saying (maybe even in this thread) "I use ChatGPT daily to make my job 10x easier" but these anecdotes are dubious at best.

IMO, the real winners here will be Nvidia and Apple, which provide software/hardware coupling for these AI features. And most of these are features, not products. Midjourney is a rare example of a real product, but the quality of generative art if you don't include copyrighted art in the training set is pretty bad, so there are a lot of complications there.


> "I use ChatGPT daily to make my job 10x easier" but these anecdotes are dubious at best.

It can do things that I would otherwise have to google first, and would rather offload. I still don't use it regularly; maybe I will use it, or something like it, one day. We are just at the beginning; it's only been a few months since we've had something that can code.

> Imo, the real winners here will be Nvidia and Apple,

Nvidia already is; just recently its stock jumped 25%+. I expect it to go further up, as there is no real strong competition. When robotics picks up, Nvidia will be a winner again. They invested a lot in hardware and software.


OpenAI may be hype, but if they got 100M people paying $20 a month for occasional use of their premium models, I would call that a valid business. It does not matter that Vicuna or whatever is nearly as good if people are paying for their model and ecosystem.


That's 100M monthly active users, not paying subscribers.


Yes, that is why I say 'if they got'. No way they have that many now, but amongst my associates many are just starting to purchase the paid subscriptions.


Agreed, but with Bing powered by GPT-4, there isn't that much advantage.


One advantage that I hope a paid version of GPT-4 (like ChatGPT Premium) has over a free one (like Bing) is that it is only a matter of time before the Bing answers are polluted by whatever an advertiser is willing to spend its money on, whereas with ChatGPT I am the paying customer.

I say "I hope", because time and time again it was shown that companies happily take money from both sides and skew the product to the wishes of the one that pays most, i.e. not me.

And since the results of AI are way less transparent than the appearance of an ad, the user will be screwed. If anything, GPT is a master of undetected product placement.

So in the end it will all be in vain, but for now, the paid version looks better than the free one.


It is no less transparent than Google search in the end.

You are right. This is a war Google has lost.

I wonder if we can have an adversarial process that does better in the AI domain than Google does in the domain they use for search.

Can we run search through Vicuna, making it less or more biased?

I think at the lower layers - embeddings vs. the search network - they are probably similar.


If you’re paying $20/month then I guarantee you’ll be the one paying more


Not for many specific topics. If Nike pays OpenAI millions of dollars to overweight their products in the model, do you think they are going to say no? Maybe for now during the hyper growth phase, but long term I expect there will be hundreds of companies with large ad budgets paying the leading AI companies to skew output or censor certain results in their favor.


Hugging Face is probably the AI company with the most widespread traction aside from OpenAI and Midjourney. The entire "open LLM" movement is based on Hugging Face.


I think you would be surprised how many people use ChatGPT on a daily basis. Claiming it makes them 10x is probably dubious but the utility is real.


It's absolutely dubious. A few of my colleagues use it. They still get outperformed. It actually makes their results worse in some cases...


If Google announced a ChatGPT equivalent embedded in Google Docs, I think ChatGPT would lose a ton of users very quickly. The Microsoft partnership/investment is their saving grace.

I think it’s a bit early to declare either of them to be a long-term (commercial) success.

Edit: See Google Duet: https://workspace.google.com/blog/product-announcements/duet...


> The Microsoft partnership/investment is their saving grace.

Who's saving who, though?

> Last December, Peter Lee, who oversees Microsoft's sprawling research efforts, told Nadella that Microsoft's researchers were blown away by GPT-4's ability to understand conversational language and generate humanlike answers, and they believed it showed sparks of artificial general intelligence.

> Nadella, demanding to know how OpenAI had managed to surpass the capabilities of the AI project Microsoft's 1,500-person research team had been working on for decades, said, "OpenAI built this with 250 people. Why do we have Microsoft Research at all?"

https://www.theinformation.com/articles/how-microsoft-swallo...


> project Microsoft's 1,500-person research team had been working on for decades said, "OpenAI built this with 250 people

One team was focused on the result while the other on the process and politics. Just my guess. Now Google and Meta are refocusing their teams as well.


Did they just execute really well? That’s astounding


Sridhar was SVP of Google Ads for a very long time. Him quitting to start a Google competitor without ads was ... optics.

Though Pragh somehow was in the process of eating him.


I'm not sure but another type of business that may succeed is one that uses AI to provide a product or service directly to consumers through a radically more efficient business process.


Why? AI increases leverage, and makes it easier for entrants to provide value.


Somewhere down the line there will also be a very heavy crash when everyone becomes disillusioned with this hype-driven self-proclaimed "AI" revolution (if we can call it that..). Misleading the investors and regulators and manipulating the market. Silicon Valley being SV once again!


I think this will follow the innovation-cycle curve, i.e. the hype will be followed by a crash, which will be followed by widespread adoption and the complete transformation of humanity.

I suspect that once the flashy headlines die off, people will realize that you can't be better than the competition using the same AI tools everybody else uses, and the idea of replacing humans with the latest flashy AI tool will give way to AI tools being the baseline. This will simply result in the bottom of the barrel in all areas of human productivity being replaced with the current average, in a similar way that industrialisation brought even the worst industrial consumer products up to the previous average and made them accessible to everyone.

So, even when the hype fades, we will end up with a much higher bar of expectations. This is going to be a transformation of society similar to the industrial revolution.


I highly doubt it will lead to anything but more suffering for humanity. And we've got to understand that no new tech was invented here; they're using the same principles and methods that have been used for the past 40-50 years, just throwing more compute power and feeding more data to them. It's a hollow promise, a "fake it till you make it" approach, and just more BS marketing, imho.

That said, its ability to generate spam and divert income from hard-working people (based on the ill-conceived notion that they can be "replaced") will probably be unprecedented.


Web3 "influencers" pivoted quickly to AI. What's next?


I think this unnecessarily dismisses 'AI' alongside 'Web3' just because a hype cycle has moved from one to the other.

I wouldn't put them in the same bucket, though.

Web3 had lots of jargon that made it inaccessible for outsiders; it's difficult to point to use cases most people could care about. Blockchain technologies aren't going to be all that useful for only one user.

With AI, it's easier to imagine cases where this can be useful (e.g. coding assistants), which don't suffer the same downsides.


Well, it's not really an "AI" if it doesn't have a truthful model of the world. I can see how people would think it's useful, but they don't see the downsides that come with it, and it's gonna become harder and harder to not be misled.


Ikr? Tomorrow it’ll be the new shiny, with even less value and more hype!


After seeing the news of Neeva shutting down, it was interesting to listen to Sridhar Ramaswamy on the No Priors podcast just ~1 month prior, talking about how the company had found product market fit and their "Aha!" moment, etc, and just generally presenting the kind of incredibly positive image one expects a founder to relentlessly present. But I guess in reality things were extremely different. Always be skeptical when someone is "talking their book." They will always say things are great, regardless of the situation, and sometimes it's actually a clue that things are even worse, as that's why they're on a media tour instead of working. But hey, $150 million isn't the worst way to fail.


Have you found any founders or podcasts that are genuine?


Neeva jumped on the AI train after GPT-3. The original plan was privacy-focused search. People don't pay for that stuff. I can't believe millions of dollars were invested in that idea.

Personalized search - a private search engine for all your content, based on the data you've collected and the pages you've visited - would be something people might pay for.


My question is whether what the AI startups are selling is fundamentally snake oil or not. Essentially they are all pitching some way of applying what people see in ChatGPT, but usually based on some in-house LLM, custom training or tuning etc, nearly always at vastly smaller scale.

Which all raises the question: how much of what people are seeing is a result of the sheer incredible model sizes in ChatGPT - 175B parameters for 3.5, and people are saying it could be 100 trillion in GPT-4. If network sizes of that scale are fundamental to achieving the kind of results people are expecting, then all of these startups are going to fail.

The most interesting discussion in the post to me is about nVidia. I am curious why what happened with crypto hasn't happened with Transformers yet, or how long it will take - that is, why don't we have custom training hardware? Are we just too early? Or is it because the operation is fundamentally so memory-intensive that the economics are totally different from crypto, where it's all about the computation and not about storing massive amounts of state?
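
For a rough sense of why memory dominates, here is a back-of-the-envelope sketch of the weight footprint of a GPT-3-class dense model; the parameter count, precision, and accelerator size are illustrative assumptions, not anyone's actual serving setup:

    # Back-of-the-envelope: weight memory for a dense transformer.
    # Illustrative assumptions only - not any vendor's real configuration.
    params = 175e9          # parameter count (GPT-3-class model)
    bytes_per_param = 2     # fp16/bf16 weights
    gpu_memory_gb = 80      # one high-end 80 GB accelerator

    weights_gb = params * bytes_per_param / 1e9
    print(f"weights alone: ~{weights_gb:.0f} GB")   # ~350 GB
    print(f"accelerators needed just to hold the weights: {weights_gb / gpu_memory_gb:.1f}")  # ~4.4

Add the KV cache and activations for long contexts on top of that, and serving means sharding across many GPUs rather than simply buying faster chips - which is closer to a memory/interconnect problem than the raw-hashrate problem that ASICs solved for crypto.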

If it's true that the fundamentals of this are such that custom silicon can't help, nVidia looks to be a huge winner. Who knows which of these AI startups are going to win, but they are all going to buy GPUs from nVidia (or rent them from the cloud). nVidia is going to be the arms dealer in the coming war.


>but usually based on some in-house LLM, custom training or tuning etc, nearly always at vastly smaller scale

Many of them are much simpler than that: merely modifying/enhancing the prompt, managing the back-and-forth with an LLM, and then structuring the output. There are lots of use cases you can cover with just that, but they are easily copyable.
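
As a minimal sketch of what such a thin wrapper looks like - prompt templating, one round trip, output structuring - where `call_llm` is a hypothetical stand-in for whatever hosted LLM API is being rented, not any particular vendor's SDK:

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a hosted LLM API call."""
        raise NotImplementedError

    def summarize_ticket(ticket_text: str) -> dict:
        # 1. Enhance the prompt with instructions and an output format.
        prompt = (
            "You are a support assistant. Summarize the ticket below and "
            "return JSON with keys 'summary' and 'priority' (low/medium/high).\n\n"
            f"Ticket:\n{ticket_text}"
        )
        # 2. One round trip to the model.
        raw = call_llm(prompt)
        # 3. Structure the output for the rest of the product.
        return json.loads(raw)

Everything of value lives in the prompt text and the parsing, which is exactly why it's easy to copy.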


There really isn't a moat with these so-called 'AI startups', especially those with VC money that are still unprofitable whilst pumping their valuation by overusing AI buzzwords and pretending to challenge Google.

Neeva was a solution in search of a problem, and almost no one cared to pay to search for results worse than Google's. Their situation was so expensive that Neeva couldn't make enough money to break even and cover their compute costs.

This is the entire race to zero, where Stability, Apple, and Meta are already at the finish line with open-source AI models or on-device inference on consumer hardware that is already available. O̶p̶e̶n̶AI.com and other hosted AI services cannot compete against open-source or freely available models, and that is why O̶p̶e̶n̶AI.com needed to cry to regulators to introduce AI licensing rules that benefit them over actual open-source or freely available AI models; i.e. regulatory capture.

I can see many of these lesser-known 'AI startups' getting acquired or shut down, while the bigger companies actually doing AI research will still be around much longer. The big money in AI is, unsurprisingly, in hardware and not software. [0]

[0] https://news.ycombinator.com/item?id=35581777


Even if Neeva were free, it still wasn't compelling enough to unseat such a dominant incumbent. Especially since the search result quality is not obviously different to the consumer.


It's quite likely that the AI hype (or, more precisely, the LLM hype) will deflate like the crypto hype, even if for a different set of reasons.

People have been conditioned to expect the next big thing, FOMO on trillion-dollar companies, winner-takes-all, etc., as if that is the new normal.

It's quite unlikely that things will play out this way again. The iPhone moment, the Google moment, and the Facebook moment will not keep producing new "tech" moments.

Tech has both arrived and bitten off more than it can chew. Take, for example, the rise and fall of crypto. It may have degenerated into inane speculation, but not before raising fear deep in the core of the financial system. This is what shut down Libra and prompted CBDC discussions.

Technically we have definitely turned a page in the last decade: algorithms and network protocols keep evolving.

But we have not found a new social equilibrium on how to adopt and deploy new business models. Privacy, sovereignty, power and control, and regulation are now center stage.

Tech startups changing the world (for good or bad) does not look like the mode for this decade.

Established entities selling shovels will milk the trend for a while, but it feels like the next decade will be decidedly different (mostly regulated corporate adoption rather than startupy).


> People have been conditioned to expect the next big thing, FOMO on trillion-dollar companies, winner-takes-all, etc., as if that is the new normal.

People have been FOMO investing since the start of investment opportunities. Tulip Mania was a famous bubble in the 1600s. Even when people mostly predict the future correctly, they can be too early like with the dot com bubble (the internet seems to mostly have worked out and arguably beyond the expectations of most of the dot com speculators). AI could very well bring major improvements to productivity at some point, but if it happens, will it be while current speculators can stay solvent or will they be too early in the same way dot com speculators were?


> will it be while current speculators can stay solvent or will they be too early in the same way dot com speculators were?

I wouldn't want to hazard a guess.

Speculative tendency is a given as an underlying driver, though it can be modulated by the structure of financial instruments, markets, interest rates, taxation etc.

What is not a given is that FOMO will be continuously expressed every few years just so that a number of savvy intermediaries can keep banking on that collective defect.

Manias are also generational phenomena - people do get burned out. I do have a feeling that currently we have collectively "burned-out" from the promised tech disruption.

I may well be wrong and a powerful idea that is not subject to all the above caveats is now incubating somewhere in the long tail.

An interesting question is whether we will see it here on HN.


This is the first I'm hearing about Neeva shuttering its search product, and I'm a paid subscriber………

Ironically the Neeva AI features were the thing that made me stop using it after I’d been using it for about 3 months.

Oh well, further proof you can trust businesses to be anything except businesses if they haven’t proven themselves yet. Promises are empty until delivered.


> But I do, however, know of a very large number of standing open offers among tech executives that are more than ready to try out a different kind of hardware than what Nvidia offers if a truly competitive one ever actually materializes.

I’ve read this sentence four times and I can’t figure out what it is saying. Is it missing a word or something, or am I missing a piece of my brain?

More generally, I guess I’m not the target audience for the upcoming paid version of this newsletter, because I can’t extract much meaning or particular insight from anything I’ve read here. Though it did make me feel a little foolish for not tossing a little money toward NVDA, say, six months ago when it should have been pretty darn obvious to me that they’re the ones selling the shovels in this particular gold rush.


Translation from gibberish to english for you:

> I am aware that there exist a large number of tech executives who would like to try out non-Nvidia hardware, if anyone makes non-Nvidia hardware that is truly competitive.


LOL. Thank you. Guess that’s why I’m not making the big bucks.


Neeva was TBTF. Had nothing to do with AI.


Too much capacity and not enough profit. Anyone who puts money into AI startups is going to find it very hard to compete with OpenAI and Google. But reality has never stopped a gold rush before, and it won't this time round.


tl;dr: nothing we didn't know. Since the beginning of time, startups with lots of funding have failed for a number of reasons. AI is no different in that regard.


Where we're going we don't need acquisitions.


If only there were something that might help us understand what we were getting into.



