Hacker News
AI startups raised $6.9B in Q1 2020 (angel.co)
67 points by noelceta on April 15, 2020 | 33 comments



1. Many startups claiming AI are simply doing things with data; research shows two-fifths have no AI programs in their products [1].

2. Round-closing announcements trail the actual closing by several months, and the raising and diligence themselves take months on top of that (even more so for larger/later rounds). Thus the $ reported here does not reflect COVID-19 and the current economic crisis, as the headline implies.

[1] https://www.ft.com/content/21b19010-3e9f-11e9-b896-fe36ec32a...


Both of these points are correct, but I'd point out one nuance.

The first point gets used a lot to support a narrative that all a startup needs to do is say "ML" in their pitch deck and secure funding. The reality is, almost all investors are aware of the prevalence of AI snake oil, and the good ones are pretty sophisticated when it comes to vetting it.

There is also an attitude, related to this point, that startups who use ML in ways that aren't "new"—as in, fine-tuning a state of the art model, using well-known techniques, or otherwise "simply doing things with data"—aren't doing something worthwhile. I actually think the opposite is true. These startups represent the most exciting change in ML, in my opinion, in a very long time: They're actually building things with it.

We have a tendency to judge all ML announcements relative to our most extreme projections (autonomous vehicles, AGI, etc.). In that sense, yeah, nothing measures up right now. But lost in the back and forth over the hype is the fact that there are a ton of companies building really cool, valuable products that couldn't exist without ML. Recommendation engines, speech-to-text, real-time prediction services (think Uber's ETA prediction), image analyzers, etc. Many of these are built by startups who aren't doing anything fundamentally new on the data science side, but I'm fine with that. Most SaaS companies aren't pushing basic technical boundaries either, and we find them pretty valuable.


I agree completely. Doing novel things with ML is very, very, very hard, and (afaict) hasn't been particularly successful at the startup level. But operationalizing existing techniques is _very_ promising, and where I think a lot of the wins will come from.


I've spent an unhealthy amount of time at presentations, conferences, seminars and meetups. With that, on your first point, I'm more inclined to believe somewhere around 90% have absolutely no "AI" or "ML" for that matter in their products. And a large fraction of those who do have re-used existing open source solutions without digging a lot into them. De omnibus dubitandum est (doubt everything), of course, but this is the impression I get.


Your suspicion is largely correct, in my experience. I've worked as a consultant on a number of different AI / ML projects for start-ups. Most aren't doing anything all that new or groundbreaking from an ML point of view. Their real innovation is usually more about applying ML to industries / areas where it hasn't been used much before. But that doesn't get the investor dollars flowing in, so the founders try to make it seem like they have some radical new ML breakthrough. And in recent times, it has worked.


I've been working in this space for a few years, as an engineer and now founder, and I think a lot of savvy investors are beyond this.

In my experience, everyone who is informed acknowledges that ML is very powerful, but the algorithms are widely accessible.

Instead, it seems that investors are looking for a company that protects itself with a proprietary source of data that allows for results that are unobtainable by competitors. Also, to a lesser extent, domain expertise that allows them to tailor existing ML architectures specifically for the problem at hand.


Yep, just grab any random ML paper off the top of arxiv and have your product development team replicate it. Even if it doesn't end up working properly at all and you have to shelve it for a more common solution, you can still claim the 'revolutionary' technique is in R&D at your company. Most investors can't tell the difference.


> And in recent times, it has worked.

This is what __REALLY__ bugs me. Personally I'd love to see money poured into actual R&D rather than people abusing the "ML" and "AI" acronyms. Investors don't care about your R&D at all and commonly see it as a huge risk factor. Which it is of course. A semi-working prototype has a much better chance of succeeding so they stick to that.

But as a consequence many people (myself included) are not even bothering with pitching anything to anyone and invest their own money, time, resources and savings into it. Blocking? Yes. Painful? Absolutely. Slow? Incredibly. I'm sure the next AI winter is around the corner, if it isn't here already: The virus outbreak might be the catalyst that triggers (or has triggered) it, given the staggering amount of people going full "I have AI which will provide a cure, vaccine and time travel to go back in time and warn the world, just gimme cash". I doubt anyone will deliver on any of those promises (and that's me being optimistic). But the crisis will likely push a lot of investors to pour millions into the empty promises that have a few buzzwords thrown in. I hope I'm wrong.


I share your frustration. If you're trying to be realistic and straightforward about the limitations of AI/ML, you get little interest from investors. Here's an anecdote. I was at an "AI in drug discovery" conference two years ago. One of the presenters, the founder of a drug discovery start-up, emphatically claimed in a talk that "we should deliberately overhype AI in drug discovery, to raise awareness of what we as an industry can accomplish". I was gobsmacked by this. And yet, at the social mixer afterwards, the investors I spoke with LIKED this approach: they said the boastful founder was bold and visionary, and they didn't care that he was exaggerating the capabilities of his company. So that's why all we hear is hype: founders are just responding to investor incentives.

That's also why I work as a freelance consultant and not as a founder. I think that autonomous driving (rather, the lack of such) is going to be what triggers the next AI winter. Too much money and hype, too few results for too long; the rope is wearing quite thin from what I can see.


I think autonomous driving, while a factor, will not be as much of a kick as a COVID-19 cure and/or vaccine. I mean all the best to everyone trying, and I hope someone proves me wrong about this by actually discovering a drug/vaccine through AI. But as far as autonomous driving is concerned, at least there are some visible results. Not as advertised, and (to my mind at least) not that valuable, but at least there's SOMETHING on the table...


> And a large fraction of those who do have re-used existing open source solutions without digging a lot into them.

This is a good thing. Coming up with new ideas is for researchers. Turning them into a product is for entrepreneurs and engineers. One of the reasons I see so much promise in AI startups is because the research has matured to a point where anyone can take it off the shelf and apply it to their domain. Similar to the web: there were a few early companies that did great things with novel engineering, but most of the value generated was CRUD apps built on top of frameworks like Ruby on Rails.
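A minimal sketch of what "off the shelf" looks like in practice, assuming scikit-learn: a toy complaint/praise text classifier in a few lines, with no novel research anywhere in sight (the texts and labels here are invented for illustration):

```python
# Off-the-shelf ML: TF-IDF features + logistic regression from scikit-learn,
# glued together with a pipeline. The "research" is all in the library.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up dataset standing in for real domain data.
texts = ["refund my order", "love this product", "item arrived broken",
         "great support team", "package never came", "works perfectly"]
labels = ["complaint", "praise", "complaint", "praise", "complaint", "praise"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["my order was broken"])[0])
```

The value isn't in the algorithm; it's in pointing this at a domain (support tickets, claims, listings) where nobody has bothered yet.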


Yup. I once worked for a startup that claimed to be doing DeepLearning, AI and other hyped stuff. The "machine learning" guy, once he allowed us to look at his code (it took two years to convince him to do that), was basically doing point-and-click with Matlab dialog wizards. It was some horribly contrived linear regression that took about a month(!) to train on a couple of million data points. But he had a PhD (non-CS) and he could bullshit the clients and management for hours... When he talked about ML models he called them "AIs", like: "So we have a couple of AIs in production that generate predictions based on blah, blah..."
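For scale, a hedged sketch of why a month of training on that data size is absurd: an ordinary least-squares fit on a couple of million synthetic points finishes in well under a minute with plain NumPy (the data shape and noise level are invented for illustration):

```python
# Closed-form linear regression on ~2M points: this is not a month-long job.
import time
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000_000, 10))                 # 2M rows, 10 features
true_w = rng.normal(size=10)
y = X @ true_w + rng.normal(scale=0.1, size=2_000_000)

start = time.perf_counter()
w, *_ = np.linalg.lstsq(X, y, rcond=None)            # one SVD-based solve
elapsed = time.perf_counter() - start

print(f"fit in {elapsed:.2f}s, "
      f"max coefficient error {np.abs(w - true_w).max():.5f}")
```

With millions of samples the recovered coefficients are essentially exact, which is the whole point: for problems like this the tooling was solved decades ago.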


I was really shocked to learn how common this is.

Credentialism and lack of integrity are rampant throughout the economy. It's soul crushing to see it.

An example that springs to mind is (I believe a YC alum) [1]. CEO is quoted as saying, "The more data we have on what happens over the next few days is going to really accelerate our ability to retrain our AI systems and then help all of you accurately understand tenant risk moving forward into this new world."

A serious modeller would be able to forecast what happens over the next few days. That the company wasn't able to do so strongly suggests to me that the "AI system" didn't have macro factors to shock and that scenario analysis was not part of the system. To a trained eye, that can be a red flag for sloppy modelling (and therefore hype).

[1] https://www.qpbriefing.com/2020/04/06/unconscionable-landlor...
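A sketch of the kind of scenario analysis the parent is describing, with an entirely hypothetical tenant-risk score: instead of waiting to "retrain on what happens over the next few days", you shock the macro factor directly and look at the range of outcomes (every coefficient, input, and function name here is invented for illustration):

```python
# Hypothetical toy risk model: higher rent burden and higher unemployment
# both push the score up. A real model would be fitted, not hand-written.
def default_risk(income, rent, unemployment_rate):
    rent_burden = rent / income
    return 0.5 * rent_burden + 2.0 * unemployment_rate

# Scenario analysis: shock the macro factor and recompute, rather than
# hoping a few days of fresh data will tell you what a crisis looks like.
scenarios = {rate: default_risk(income=4000, rent=1500, unemployment_rate=rate)
             for rate in (0.04, 0.10, 0.20)}

for rate, risk in scenarios.items():
    print(f"unemployment {rate:.0%}: risk {risk:.3f}")
```

If a system can't be stressed like this, "retraining on the next few days" is just curve-fitting to a shock it never modelled.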


As far as I can tell the most egregious industry is insurance (specifically property related). Every single company claims to be doing AI and it's all complete bullshit. The term has no meaning anymore.


https://a16z.com/2020/02/16/the-new-business-of-ai-and-how-i... a very good take.

Most AI startups are struggling to scale because models are hard to scale and generalize compared to SaaS. There'll be some fortunes made for sure, but we're still very early in building out these businesses and mostly these seem to be VC's who are (were?) desperate for "deep-tech" portfolio items to make them sound "bleeding-edge" at CogX, Davos, that thing in Aspen, etc.


Seconded on this article. I shout from the rooftops to anyone who will listen that for ML, "the confusion matrix is the product," which summarizes much of what they say. The only thing I add to that adage now is that a meaningful product is a weighted function of Relationships, Data, and Expertise.

Most ML companies have one, funded ones have two, and profitable ones have the right balance of three. Funding an ML company that isn't 85th percentile on at least two is dumb.
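For anyone who hasn't run into the adage, a minimal illustration of "the confusion matrix is the product": the matrix, and the precision/recall that fall straight out of it, are often the artifact customers actually evaluate (the labels and predictions below are made up):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
m = confusion_matrix(y_true, y_pred, n_classes=2)
print(m)

# Precision and recall for class 1 are just ratios of matrix entries.
precision = m[1, 1] / m[:, 1].sum()
recall = m[1, 1] / m[1, :].sum()
print(f"precision={precision:.2f} recall={recall:.2f}")
```

Whether those two numbers are acceptable at the customer's operating point is the product conversation; the model architecture rarely comes up.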


The key bit of information from this (https://venturebeat.com/2020/04/14/ai-startups-raised-6-9-bi...) is that "capital flows disproportionally into large late-stage deals." I would read this as a lagging indicator of how "hot" the field is. Most of the companies have found their "niche" and identified the actual problems that need to be solved; like other commentators have said, these problems may only be tangentially related to ML models themselves. These companies are currently scaling up and locking themselves into the market. Given how long the sales cycle is and how sticky it may be in enterprise, this means now is not a great time to start a company just doing "AI".


I expected to see a list of the startup names and how much they raised.


What's the value of this link over the original one?

Original: https://venturebeat.com/2020/04/14/ai-startups-raised-6-9-bi...


What percentage of these "AI" startups do you think are doing no more than traditional automation with technology? Applying deep learning to do no more than what can be done with more accessible technology is like using Haskell on steroids, ensuring that no more than a few people will ever be able to maintain it.


Pretty sure it's mainly platforms. https://hbr.org/2020/03/navigating-the-new-landscape-of-ai-p...

I've personally been seeing several pop up for helping with training+deployment for edge-oriented applications like Edge Impulse and Latent AI.


Latent Sciences?? https://latentsci.com/


Were you referring to when I brought up Latent AI? If so, this is them: https://latentai.com/


This is an insane amount of funding when you look at the projected revenue for machine learning over the next couple years.

Maybe those projections are wrong, and I certainly love all the energy in this space, but it feels like a lot of investors are going to get burned.


Almost none of the company leaders or even VCs fully understand what AI even is or does. They just like to hear that it's there.

If you don't have some AI in your company, you won't get investors.


Is it really that bad? I thought investors care more about business model and growth potential.


It isn't THAT bad. Investors definitely care more about model and growth potential. But you will certainly run into those who ask "How are you integrating AI into your platform?" without any real idea about what that means.


Incentives are really messed up here. Tacking the letters "AI" onto the end of your company name will increase its multiple. It will force your engineers to call their rules engine "AI" when on calls with customers. It's really unfortunate.


Is there a breakdown of the percentage allocated from seed, series A, B, ... all the way down to F rounds?


I was hoping the bubble had burst by now, but it sounds like it might not be there yet.


"Anonymous Indians"


Was not trying to be racist, it's just a Twitter term for fake AI startups (started by one Indian man).


Wrong thread?



