OpenAI Announces Funding for Startups (openai.com)
204 points by cl42 on May 26, 2021 | 113 comments



I am a fan of OpenAI, but is this not an admission that they raised more capital than they know how to deploy on research?

Similarly, Peter Thiel once made the case that if Google ever paid a dividend it would be an admission that they are no longer a technology company and are instead a bet against innovation in search. [0]

[0] https://youtu.be/2Q26XIKtwXQ


No. OpenAI would not invest their operating capital in startups - that would be legally and fiscally irresponsible. Instead, a separate entity is created to manage the fund and OpenAI controls that entity. That fund then raises money from LPs specifically for investments. I would imagine some of the profits/carry will then flow back into the parent. What this ends up being though is mostly a branding exercise to boost OpenAI's developer ecosystem, with some upside for the company itself if they are able to fund some successful startups.


Keep in mind that the former president of Y Combinator is the CEO of OpenAI. That might help explain a move like this. Also, a lot of people recently left OpenAI at the same time, and there was some speculation that they were going to start a new startup. Maybe this is a way for OpenAI to be able to invest in them.


Or the OpenAI venture fund is an acknowledgement that success requires an ecosystem that extends beyond their front door. Funding startups is a great way to signal that OpenAI thinks their stuff is so compelling that it will lead to startup-worthy ROI. It is also a way to get an inside track on potential acquisitions.

The alternative is a pure NIH (not-invented-here) strategy, or building something and hoping for partners, both of which are flawed.

And regarding the Thiel snippet, dividends are isomorphic to stock buybacks, which Google has been doing for a while. They seem to still be a tech company, at least as far as my Ex-Googler eyes can judge.


Didn't Apple do exactly this, maybe with Sequoia, around its iTunes store to encourage innovation?

Also, dividends generally can't be stopped; the expectation is that you continue and eventually increase them. Share buybacks carry no such expectation.


I would take a more charitable view on their move here.

AGI research is risky from an R&D standpoint (for obvious reasons), and also tricky from a business strategy and product development standpoint. There isn't a mature business playbook for how to monetize this technology, and although they probably have some ideas, their GPT-3 API pilots have suggested that outside entrepreneurs and programmers can come up with a larger search space of potential use cases than OpenAI can themselves (in the same way that AWS users create more diverse products than Amazon can envision).

It's not an admission that they can't deploy capital - it's that they see an untapped resource of creativity that they can cheaply profit from, rather than building things in-house. They would rather grow the whole pie than try to grow their own slice. It's in a similar vein to how Taiwan Semiconductor lets other companies build on top of its platform and fosters trust by never competing with them. In turn, it gets to partner with more companies.

If they had pivoted entirely to a 100% investment firm, I would agree with you. But it looks to me from this announcement that they have built some fundamental technology and would like others to figure out the best way to monetize it, and they want to focus on new fundamental technology. An investment fund will align incentives with entrepreneurs building on top of GPT-3.


Then why can't we get access to their API?


Given the format of the application and the sectors they’re targeting for investment (companies that would benefit from applied AI, not other pure AI plays), this reads to me more like “there need to be more customers and demonstrated use cases for the tools we built, and fast” ... “and they should also be built on Azure infra”


I think it's an admission that the near-term benefits come from applications of existing tech, not AGI, and that they have hired folks who want to do research, not people who want to do the grueling work of building a startup.

This is fine, though, if OpenAI actually has an edge. I don't think they have a meaningful tech edge; you can see projects like GPT-Neo fully reimplementing the ideas from the papers. I also think they see this, since they're looking to make a small number of investments, and the big benefits of things like the GPT-3 API go to projects with little experience or funding.

But maybe they have an edge in evaluating AI startups, and an edge in advising them beyond the typical VC. And if they do have a good repeatable edge there, this actually seems like a pretty interesting way to fund a research lab.


I don't think it has to be - I think they're thinking about how to turn that research into money in a healthy way, so as to keep doing relevant work in a super capital-intensive area and maintain their ability to exert some control over how AI is used, and they quite rightly assume that they can't both do the research and invent all of the killer apps at the same time.

It's interesting if you compare it to Bell Labs or Xerox PARC, right? The research lab's hackers helping make funding decisions about startups that use their secret sauce might seem funny historically, though maybe it would have been more effective.

(In practice, though, as HN is well aware, OpenAI isn't just some research lab; serial entrepreneurs who are connected with the most successful incubator ever, etc).


> Similarly, Peter Thiel once made the case that if Google ever paid a dividend it would be an admission that they are no longer a technology company and are instead a bet against innovation in search.

That seems silly to me. Google is not paying dividends right now, but they are doing stock buybacks, which is basically the same thing. However, Google also has 100k+ employees; there are only so many people who can work on innovation in search at the same time.


From the introduction: "The fund is managed by OpenAI, with investment from Microsoft and other OpenAI partners."

It's a separate fund.

> Peter Thiel once made a case that if Google ever paid a dividend it would be an admission that they are no longer a technology company

Well... duh? They might be a tech company and pay dividends, but they aren't a growth company when that happens. This is sharemarkets 101.


Good point. I suppose it depends on how much of the capital in the fund is committed from their reserves, if any.


To make things more lucrative, one option would be to remove the condition that OpenAI APIs must be used. But then the people who pitch to OpenAI are also opening themselves up to competition. Another option would be to spin out an OpenAI venture firm with no strings attached.


> Similarly, Peter Thiel once made the case that if Google ever paid a dividend it would be an admission that they are no longer a technology company and are instead a bet against innovation in search. [0]

Does that mean it makes this a bet against innovation in AI?


That was not what I meant to imply.

He is saying that Google not investing their capital reserves in R&D implies that they have run out of good research ideas.

If you haven’t watched the clip I recommend it.


I have watched it. I'm just trying to understand what parallel you were trying to draw to the OpenAI story by quoting Thiel. I think he was saying Google should be broken up because, as you say, they have run out of good ideas and are just using their capital to keep new competition in search at bay. So by quoting Thiel, wouldn't you be implying that if OpenAI has also run out of good ideas, then they're also trying to keep competition in AI at bay?


A company that never issues dividends (or equivalents) has a net present value of 0.
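To sketch the standard dividend-discount argument behind this claim (a simplified model, assuming a constant discount rate r):

  % Present value of a share = the discounted stream of all future payouts D_t
  P_0 = \sum_{t=1}^{\infty} \frac{D_t}{(1+r)^t}
  % If D_t = 0 for every t (no dividends, buybacks, or liquidation payout),
  % the sum is 0 no matter how fast earnings grow.

The "or equivalents" carries the weight: anything that eventually returns cash to the holder counts as a nonzero D_t.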


That completely ignores acquisition as an exit strategy (it even works for public companies).


What about share buybacks?


Share buybacks are dividend equivalents.


Seems like the big pile of money they're sitting on is proof of that.


A ton of negativity in the comments here. Greater availability of funding is awesome. I'm working on a project right now as a spinoff of a research contract that I'm going to land and apply with. Very exciting and very timely.


OpenAI doesn’t have much goodwill in the community, I would venture to say. Many people here have been disappointed by the delta between the open vision first described and the closed reality now, and also by the fact that applying for API access just leads to silence.


It's just the old VC pattern of making a big pile of money and using it to outcompete everybody else, as opposed to creating an environment where everybody can thrive.


Not true, just got access myself today. Applied a few months ago and super stoked to stay up late and play around tonight. Don't know anyone at OpenAI and didn't pay off any Microsoft officials or anything. Just have patience.


> in the community

Which community?

There is an irreconcilable conflict between AI safety and AI openness. If you create a dangerous program, and you know it's dangerous, then it would be insane to release it.

This was widely pointed out when OpenAI was announced. They would need to pick one or the other, and they've picked safety.

I think it was the correct decision, though it does make their name sound rather stupid now.


> There is an irreconcilable conflict between AI safety and AI openness. If you create a dangerous program, and you know it's dangerous, then it would be insane to release it.

> This was widely pointed out when OpenAI was announced. They would need to pick one or the other, and they've picked safety.

It is a bit strange that this just happened to coincide with huge amounts of money for everyone involved, if it was really just a matter of straightening out their philosophy around openness. They transitioned from a non-profit to a for-profit with a non-profit parent (or something like that) at the same time.


> There is an irreconcilable conflict between AI safety and AI openness. If you create a dangerous program, and you know it's dangerous, then it would be insane to release it.

If you remember OpenAI's creation, the whole idea was that AI safety comes from democratizing AI. Their idea of AI safety was AI openness.

It's like how some would describe the Second Amendment in the US — by democratizing these dangerous weapons, there may be more chaos but people will be safer from some overlord who holds all the dangerous weapons.

This isn't to say that I agree, but what you're suggesting as their mission is in fact antithetical to what they claimed to be their mission.


This argument doesn't hold water. Models like DALL-E, which can create cartoons from short text phrases, are not open-sourced. That's probably not because of "AI safety".


It can probably create more than cartoons, and even with cartoons there are plenty of very offensive and dangerous terms you could use to create dangerous and offensive cartoons. I mean, I would love to be able to type in anything I want into DALL-E, but the time from release to it sparking some kind of geopolitical incident could probably be measured in hours.


So the fact that somewhere on earth there might exist some fanatical people who might be offended by something becomes sufficient reason to shut down access to a cartoon-generating capability?

Seems like you are setting a pretty low bar for what we will allow with AI on one end, and what will trigger an AI feature’s general availability to be cancelled, on the other end. Everything else in between these two is going to be even harder.

To make my point clearer, imagine a magic quadrant with two axes. One axis is capability, running from harsh to mild: harsh would be "it can physically burn and kill everything to ashes" and mild would be "it can temporarily distract someone."

The other axis would be who gets affected. That axis goes from "every living being in the known universe" on one end, to "nobody" on the other end, with "a very small handful of fanatics with extreme outlier beliefs" somewhere out there toward the "nobody" end.

In the graph, you are setting the bar way over on the "just a slight distraction" side, and way over toward the edge of the "nobody" side, and saying that this is sufficient reason to cancel AI access for the general public.


> but the time from release to it sparking some kind of geopolitical incident could probably be measured in hours.

This sounds like a huge exaggeration. At the resolution of DALL-E, anyone can photoshop or draw something of the same quality.

Also, this argument was unsuccessfully used when OpenAI claimed that releasing GPT-2 was going to cause massive societal strife. They released it later, and life continues on.


Anyone can draw an offensive cartoon. They are withholding the solution for money. If a startup that is removing backgrounds from photos is worth close to $100M (remove.bg) imagine how much this is worth. This technology is worth billions and could replace illustrators in many cases in web design, content, etc.


What? You can do all of that with Photoshop or even a meme generator faster. Why restrict DALL-E for fear of unPC uses?


> If you create a dangerous program, and you know it's dangerous, then it would be insane to release it.

Oh please. "Only we can control this terrible monster we have created."

Marketing guff.


The fund is not a charity, nor a typical VC investment: it clearly says they want to see your project leveraging their API. In other words, you will be paying them back by using their APIs. It's almost like how cloud vendors give free credits; there's no way this will help research or the open source community.


> If your startup plans to push the boundaries of today’s artificial intelligence by building with our API, we want to hear from you.

How about just giving me access to your API so I can start tinkering with it, and who knows, maybe it becomes a startup that you can fund?


I got my beta invite last month. I messed around with it for about 30-45 minutes, only to discover that I had already blown through half of my allocated “trial credits”. This really killed my motivation to keep tinkering with the product.


I had exactly the same experience, and any hope of developing a business case went out the window when I realised I'd need a business case to justify the expense of developing a business case. In a desperate attempt to salvage the possibility of using the AI in any fashion I tried prompting GPT-3 to write the business case for itself, feeding it excerpts of several successful such documents of my own to set the tone and structure, at which point I ran out of credits.

You can try doing the same on the cheap with AI Dungeon, but there's a fair chance it'll be overrun by vampires and mad scientists before the IRR-to-WACC ratio estimation section, rendering any such document fit only for mopping up after goblins. You feel a sudden pain in your chest.


It feels like the second paragraph was written by OpenAI... Was it?


Now every AI article has a comment from someone being suspicious that a comment was generated.

[This comment, like every other comment on HN, was generated by GPT-3. You're the only human here.]


Well, okay... but the reason I'm suspicious is that the last sentence ends abruptly. It calls for a follow-up that never comes.


It was mimicking GPT-3's writing style in AI Dungeon.



Oh no yea they have to keep it closed to keep terrorists from generating illicit mad-libs with their AGI


They are highly paternal and even evangelical about it, with very sensitive warnings about how the output is "harmful" and we'll make it "safe". Avert your eyes children! We will protect you and ensure textual purity in accordance with the church.

George Carlin is rolling in his grave.


People frequently cause harm by using language. Being cautious about something that could potentially generate orders of magnitude more targeted harmful language seems reasonable.

In general, when people working full-time on a technology think it's dangerous and you don't see why, it's best to assume that they've spent a lot more time thinking of ways it could go wrong than you.


> In general, when people working full-time on a technology think it's dangerous and you don't see why, it's best to assume that they've spent a lot more time thinking of ways it could go wrong than you.

Or they could explain it to us in a way that's understandable. I see no reason to give them the benefit of the doubt when they've already thrown away so much goodwill.


Sometimes this works. It was pretty easy to explain what can go wrong with nuclear weapons so the public can understand.

Sometimes it doesn't. People have, for decades, expressed concerns about the risks of gain-of-function research on viruses, and approximately 0% of the public understood it until last year.

Many people think Facebook is harmful, and a few people predicted so 15 years ago. Facebook went right ahead and did it anyway, so we know that the warnings were valid. Would you have argued with the "Facebook will become harmful" people 15 years ago, demanding they explain exactly what the harms would be? If so, you'd have ended up on the wrong side of history.


Sorry to reply so late to this.

Your points are entirely valid, but in this instance OpenAI - to the best of my knowledge - hasn't actually stated what any of the potential negative consequences are. I realize it's possible that they truly have uncovered some magic mystery, but why would they be the only ones to realize it when others are working on similar problems? And more importantly, why can't anyone articulate what these dangers are? I have heard a few scare-mongering arguments which sound like they were written for clickbait, but nothing substantial.

I realize you used to work/collaborate there so I'm sure you were exposed to these ideas and I respect your view, but my frustration is you still aren't stating what the bad consequences are. Is it really so nefarious that even mentioning it triggers some sort of scourge upon humanity? I just don't buy it, especially considering the managerial history (looking from the outside in) of OpenAI.


That's like deferring to people who work in the tobacco industry about what's dangerous or not with cigarettes.

Also, many of them are still under the impression they're making the world a better place at Facebook and Twitter. So no, let's not pretend technologists know what's best. And they don't understand language and society better than, say, George Carlin. They only think they do.


> That's like deferring to people who work in the tobacco industry about what's dangerous or not with cigarettes

It's not a symmetric bias. If someone selling you tobacco or ad spam tells you it's safe, one could reasonably be skeptical. If that same person voices specific concerns, it's more notable for coming from them.


Except they aren't specific concerns; it's just a generic "it's dangerous, we need to control it". Considering that they said the same about GPT-2 (and its release ended up doing ... nothing), I think there's good reason to be suspicious of bias, because OpenAI being the gatekeeper is profitable for them.


Tobacco isn't the best example.

Imagine a company that gatekeeps a language feature, and they are staffed with Creationists or Scientologists. Is it notable if they define certain output as dangerous? No. Should you be skeptical? Yes. It's the same if they are staffed with Wokeists. In both cases they are defining what's "dangerous" according to their religion/ideology.


There may be some actual legitimate reasons to want to withhold AI from the public.

A company named OpenAI, though? So stupid. They need to change their name already.


They're about as open as Google is not evil.


+1.


Not exactly open in many other ways, either.


Also geo-blocked in some countries


What, you want people to host services for you for free?


I was going to say "for a nonprofit all about making AI more open, maybe!" but then...

https://techcrunch.com/2019/03/11/openai-shifts-from-nonprof...


I think the point is, what exactly is "open" about OpenAI? Can you download their trained model? Can you even access their API without risk of your access getting cut off? What is open about it?


They are open to get nice PR at any opportunity, like DeepMind.


The OpenAI API/GPT-3 is still invite-only.


Is there any timeline on the horizon for when normies who don't know anyone important can get a token?


As far as I'm aware, there is no intention to ever open it up. You're better off waiting for GPT-NeoX.


Not even for people to pay for using it?


I got the beta and didn’t do anything special, just applied relatively early. I’d just be patient, their community Slack is constantly growing.


Is OpenAI profitable (yet)?

I thought their game plan originally was to raise a TON of money in order to not be (very) financially burdened while doing the long-term R&D necessary to build what would (hopefully) become AI models so advanced they could be traditionally monetized and level out the huge company debts.

Funding startups (see: taking risky bets on time _and_ money) doesn't seem to fit into that model... unless:

1. They've built a financially-viable product [*] and have the spare time and money to start paying for adoption/growth.

2. They're adding more risk to their debts now to bet on a much bigger payoff from startups using their tech later [*].

So I guess my question is... why are they doing this?

[*] I use both GPT-2 and GPT-3 almost daily and don't have a masterful understanding of either, but they both do fall short of 90% of their hype/marketing. They're amazing jumps forward, but... nobody built lasting businesses on Markov chains when they were new, either. I wouldn't want to build a business propped up solely on either of them yet.


The page says that OpenAI only manages the fund. The money comes from Microsoft and “others”.

So if you’re OpenAI, there’s not much financial cost. What you get in return is companies using your service for a wide range of applications. You get all of their data streaming into your system, with constant feedback for you to iterate your models.

It makes a lot of sense for them.

For businesses, though, welcome to training Microsoft and whoever those “others” are. OpenAI itself took a huge amount of funding from MSFT. So a company is helping OpenAI build models to replicate what it does, giving MSFT (along with whoever the others are) access to those models, and letting them wait until the experience is excellent before jumping in.

Sure, they only have access to the data and models. Except, oops, they also _invested_ in your company so they have access to your financials as well. They get regular status updates on whether what you’re doing makes sense.

But if OpenAI is the best out there, even if training their models could eventually kill you, what choice do you have? Develop your own AI to compete against their head start and billions? You won’t win.


From how I read it, it’s not as if they’re spending their own money, but rather starting a new fund with other people’s money. It says specifically that OpenAI is managing the fund, and others are bringing money to the table.

Whether that distracts them from their core mission is up for discussion, but I don’t think they’re increasing their debts or anything like that.


He says people are using GPT-3 to answer customer questions in chats... How is that possible? How do you prevent it from acting as an agent and committing you to something you can't commit to, thereby introducing legal liability? Do you just have some small legal disclaimer that everything it says should be treated as entertainment only?


Not entirely related, but we've used GPT-3 to augment our resume software and the results have been useful to a huge number of job seekers. Perhaps we'll take a stab at this.


https://www.sudowrite.com/ is one of my favorite new tools and it's also powered by GPT-3 I think. Super interesting what they are doing.


Such a cool concept. Here is our less creative way of using GPT-3: https://www.rezi.ai/rezi-ai-cover-letter-writer


OpenAI is the corporate equivalent of a social media influencer. Rather than sponsored product promotion alongside a seductive lifestyle, they offer Azure product placement alongside trendy AI research and, now, futuristic startups.


With the amount of money MSFT has invested in OpenAI ($1B+), is it fair to say that OpenAI risks being shadow-acquired by MSFT at any moment?


MS's parallel DeepSpeed ZeRO effort is also pretty striking considering their investment in OA.


> "Developers are using GPT-3 to create realistic dialogue, summarize complex documents, answer customer services questions, and make search better than ever before"

This is OpenAI desperately admitting: "nobody needs this, we'll pay you to help us find a market and a vision".

What do we do with all of these fancy, expensive models? Nobody knows really.

We don't really have a clear use for any of it, beyond the most rudimentary tricks. We put all of our focus into building a fancy solution, hoping we'd figure the problem out later. We're finally starting to realize none of this has a practical use.


I have to agree. It really reminds me of IBM Watson's tech-for-revenue-share arrangement. If any of it works well, why not build the business yourself?


> We're finally starting to realize none of this has a practical use.

Even a decent-quality chatbot could be applied all over the place. For example, video games, education, sex toys, auto-generated narratives for books/movies/games/comics/etc., sales/spam bots, tech support, enhanced searches, automated assistants, etc..


> "Even a decent-quality chatbot could be applied all over the place"

When shit hits the fan - you want a human being on the other side of the line to help you through whatever you need help with.

Call centers can be expensive - but the chat bots we see today are truly useless, if not even harmful. We call them "bots" - but a lot of them could've been implemented like the dialogue mechanics in an RPG (a state machine of predefined questions and predefined answers; see the sketch below). There's nothing "smart" about it - it's the same tech we've had for the last 30 years, rebranded.

These things just don't work. GPT-3 isn't good enough - and honestly, it's not even needed.
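To illustrate how little "tech" that takes, here is a minimal sketch of the state-machine pattern mentioned above; every state, prompt, and transition is invented for illustration:

  # A scripted "chatbot" as a plain state machine: predefined prompts and
  # canned transitions, no ML anywhere.
  DIALOGUE = {
      "start": {
          "prompt": "Hi! Type 1 for billing or 2 for shipping.",
          "next": {"1": "billing", "2": "shipping"},
      },
      "billing": {
          "prompt": "Type 1 for refunds or 2 for invoices.",
          "next": {"1": "done", "2": "done"},
      },
      "shipping": {
          "prompt": "Type 1 to track a package or 2 to report damage.",
          "next": {"1": "done", "2": "done"},
      },
  }

  def run() -> None:
      state = "start"
      while state != "done":
          node = DIALOGUE[state]
          reply = input(node["prompt"] + " ").strip()
          # Unrecognized input keeps the user in the same state --
          # exactly the rigid behavior described above.
          state = node["next"].get(reply, state)
      print("Thanks! Transferring you to a human agent...")

  run()

Nothing here needs a model, a GPU, or a training budget; it's the same table-driven dialogue that RPGs have shipped for decades.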


I get the sense that you're focusing on when stuff like GPT-3 isn't useful. For example, yes, a customer with a complex issue to a call-center may need to speak to an expert that understands stuff beyond the level of a decent-quality chatbot.

But a lot of humans who work at call-centers are already chatbots. And not even decent-quality ones! They're basically reading from scripts, and when they go off-script, it's often unreliable guesswork. A decent-quality chatbot could be a pretty significant improvement in many positions currently held by humans.

Of course, you're right that GPT-3 isn't any sort of ultimate technological end-goal; that'd seem to go without saying. Still, it doesn't need to be some sort of uber-tech to have its uses. I mean, even a slightly smarter Google-like search box could be a helpful near-term technology.


I think he wants us to feed the AI our business plans? I wonder what they intend to do with that? Pattern match investments, or something more?


I for one welcome this announcement, take that enormous "AI" money and fund people who have an actual product. Brilliant.


OpenAI wants to be YC.


Altman openly says, if you want to make money, you have to invest in startups (i.e. piggyback on the smarts and dedication of others). He wants to make money on whatever is hot, so there you have it.

It's just sad that they needed to deceive the audience with a BS feel-good story and a super awkward name to begin with. In my book that's basically a business no-go, period.

What's worse, the field is a total minefield. Jobs will get lost, surveillance and feedback loops will get tighter - all for the sake of making X billions from Y billions.

I could also see Altman replaying his early "success": something like taking $30M for a BS idea, selling it for $30.5M, and celebrating big.


The cynicism on this thread is startlingly bad.

OpenAI is one of the top 5 AI research groups in the world. Getting early access to their systems is an amazing opportunity.

Of course they should fund AI startups: $100M isn't much for Microsoft and others to put up as an investment in a Sam Altman-led fund, and innovation is what happens elsewhere.


This is a great company doing great things, but the cynicism towards "Open" AI is of their own making; the original vision that many here supported was seemingly left by the wayside as soon as the money started rolling in.

There would be less "cynicism" if they had either stuck to the original open plan (or communicated the intended lack of openness better from the get-go), or at the very least been honest about their seeming U-turn and changed the name to reflect the new position - maybe branching the actual open stuff off into a separate and clearly demarcated organisation.


I think it's awesome that Sam and Co. are doing this. I can imagine variations of GPT predicting bio-tech, medicine, human relationships, environment, politics and more. Many of the companies tapping into GPT3 are just scratching the surface. I love that they are funding some big bets.


"We're here to make AI Safe" was the introduction.

That's some interesting language choice.


Objection, Your Honor. Assumes facts not in evidence: namely that AI exists.


This funding does not appear to be explicitly GPT-3 related (any type of AI is accepted), but the video/application hints very heavily toward favoring applications using it.


What the hell is OpenAI anymore?


Is there a profitable business based on GPT-3?


Maybe AI Dungeon? That's the only one I could think of.


What goes around comes around?


Seems like the CEO is a one-trick pony, making everything he touches a startup incubator. How does this fulfill OpenAI's mission of bringing about AGI?


Investing in startups that use AI in the course of their business would be one way to expand the surface area of innovation that would lead to that end.


I'd say money is cheap right now and fund managers would rather have people with information asymmetry invest their money, which OpenAI does via their API. I think it's a natural fit to invest in companies that use their API. This isn't a new idea, Stripe and Slack both do this.


YCombinator is a tech company whose business model is investing. Seems like OpenAI could be the same.


How is that different from a VC that invests in tech startups? What, exactly, is YC’s tech?


> they wonder, before clicking the reply button on a popular piece of YC tech


Nobody uses HN because it's an amazing piece of software.


Yeah we do. HN made a lot of intentional choices to avoid becoming a generic social media clone. There's a reason why nobody is replying with reaction GIFs in the comments or linking to rickrolls. HN is a unique piece of software that goes hand in hand with the moderation that augments it.


HN is actually an amazing piece of software in its own minimalistic way


By that definition any business that develops an app or website or forum software is a tech company.

Sounds an awful lot like WeWork trying to brand itself as a tech company.


The success of HN is from its moderation, not from its tech.


Are you calling Sam Altman a one-trick pony? A quick list of his "tricks":

- founded a company and raised $40M
- proved YC can scale
- made numerous strong individual investments (Humanyze, Routable)
- became CEO of OpenAI, leading to a $1B investment from Microsoft

If allocating capital well makes someone a one-trick pony, then I think we need many more ponies in the world.


Even Sam Altman considers his own company to be a failure - you fail to mention that they basically sold for as much as they raised, so it wasn't a success for most (if any) investors.

Also I don't think Sam proved YC can scale, in fact I think it's the opposite and we just haven't had enough time to watch it play out. I certainly hold YC in lower regard than I used to, and I'm a YC alum (alternate account).

He is a successful investor, I can't deny that. But he hasn't proven anything with OpenAI yet. And I think anybody that actually followed OpenAI from the beginning is really disappointed in how "open" it really is. The fact that you point to fundraising as proof of success is so bizarre. By that logic WeWork should be your favorite company of the last decade.


form is broken lol


Great seeing them expanding from just being Azure resellers.



