Hugging Face is early in the Silicon Valley enshittification cycle - currently burning VC money being incredibly good to its users. Next comes shifting that value to its business customers. Then clawing back and squeezing as much value as possible from them. Then collapse.
Although, is it technically a platform / marketplace?
They represent a real risk to the AI ecosystem IMO (or an opportunity if you're an investor). They have really cornered the market on model hosting as well as on frameworks for running models, and it's going to get ugly when they start turning the screws.
I think it's important for people to diversify away from them and not build anything that uniquely depends on them. It's not good to have chokepoints like this.
Diversify to what, though? The economics of hosting models and datasets seem to just not be great in general. For the same reasons that we can expect HF to eventually crack down on the "freeloaders", I think pretty much any other entity also will unless it's explicitly set up as a public benefit. I could see this sort of thing maybe being funded by universities or their affiliated foundations, but most companies with the resources to provide this kind of hosting also have a pretty strong incentive to extract revenue from it.
Pip, maven, npm, docker hub, apt-get, github and suchlike seem to manage to provide hefty binary downloads with no need to monetise anonymous downloads.
I don't quite understand how they do so, but it seems to be possible. Everyone who downloads a 5GB stable diffusion model probably downloaded 5GB of pytorch+cuda+cudnn first.
For its part, PyPI is apparently funded by the Python Software Foundation and run by volunteer admins, with the bulk of its data served gratis by Fastly. This is based on a blog post about its infrastructure from 2021 [1], which also says that the list price for the throughput they're doing is $1.8 million per month.
Back in the day, universities would host all that shit via their mirrors for free for everyone. Not sure where it all is today with the cloud being so prevalent.
Linux distros still have large networks of mirrors for downloading their releases and also packaged binary updates. Maybe we don't "hear" the term but they are still there.
Do they really? As far as I can tell, diffusers/transformers are open-sourced wrappers around torch implementations, there are a bunch of companies offering inference (just like they are), and there is some value in offering S3 storage to allow for model search.
If they reach a point where they actively become community-hostile, someone will just fork their codebase and release a web app called "FaceHugger"
Their Git LFS-based hosting is used by a lot of AI tools. Hugging Face-level data storage and transfer would also be VERY costly for a small party to replicate. Some of these models are several gigabytes in size and downloaded hundreds or thousands of times per day.
What they do isn't too hard to replicate, but the price for which they do it is impossible to compete with.
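A quick back-of-envelope sketch shows why. All inputs below are assumptions: the model size and download rate are taken from the figures in the parent comment, and the egress price is a typical on-demand cloud list price (HF almost certainly pays far less through CDN deals):

```python
# Rough monthly egress cost for hosting ONE popular model.
model_size_gb = 5           # "several gigabytes in size"
downloads_per_day = 1_000   # "hundreds or thousands of times per day"
egress_price_per_gb = 0.09  # assumed cloud list price; CDN deals are much cheaper

monthly_gb = model_size_gb * downloads_per_day * 30
monthly_cost = monthly_gb * egress_price_per_gb
print(f"{monthly_gb:,} GB/month ≈ ${monthly_cost:,.0f}/month")
# → 150,000 GB/month ≈ $13,500/month, for a single model at list price
```

Multiply that by thousands of hosted models and it's clear why a small party can't match "free".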
Some models are distributed P2P. They download slower, in my experience, and require special software rather than the usual git commands.
P2P model distribution isn't impossible, but it's a lot less interesting to many people, or they would've already spread their models through torrents in the first place.
CivitAI has somehow managed this over in Stable Diffusion land.
In fact, I think the whole community is HF-averse. The two most popular frameworks are based around the Stability implementation and file format, not HF diffusers.
FYI the file format, safetensors, was proposed, developed and maintained by HF, and involved people from groups such as Eleuther and Stability for external security audits.
I think they're HF-averse because HF is fairly anti-NSFW content and it's pretty clear that civit.ai has a fairly large NSFW-focused audience.
On top of that HF is just hard to use for your average user. Civit.ai is just "click to download" while HF is "look here's a broken model card... you can figure it out from here".
Despite the cute logo, I think most people find that HF comes across as fairly anti-user. Despite having years doing ML related work, I still find HF a bit byzantine to navigate.
Technology-wise, path dependency from stuff that was built out before HF diffusers were available, and on which more continues to be built in the ecosystem.
If you are doing a greenfield project that is mostly standalone, it's not an issue, but for existing popular projects and the communities of satellite projects around them, the switching cost is high.
In terms of content hosting, well, a fair amount is hosted on HF, but there is a difference of content focus between CivitAI and HF, and a lot of what CivitAI hosts HF probably wouldn’t want to. Also, CivitAI has a UI focused on the narrow space of imagegen, whereas HF is more general.
Automatic1111 and ComfyUI implemented the SAI backend/format in the early days because that's all there was, and now they are stuck with it. The inertia is tremendous.
The popular fork of Automatic1111 SD.Next recently implemented diffusers for its SDXL support.
It seems like an odd move to me as what makes Auto1111 so appealing are the extensions and I’m assuming that breaks an awful lot of them, but what do I know?
I've seen some repos on Github host the models on "Releases". Not sure how viable that is long term, but definitely easier to download and access than some random Google Drive link which is typical for a lot of other repos.
I don't know much about it, and I think (but couldn't quickly confirm) it's open source, so your point about a fork still stands, although I don't think that solves everything. If it did, people wouldn't care that e.g. Hashicorp changed their license away from open source.
You can replace this with "container" and "docker", and it turned out exactly (or close to) how you predict. However, Hugging Face isn't offering anything with a big moat. If they over-enshittify, competitors will rise up.
Late in the "enshittification cycle", competitors do rise, but they're usually under-resourced. So you get an overly monetized primary product and a bunch of "never quite there" competitors. That's overall worse than at the primary product's "free beer" phase.
The most likely successful entrants I can imagine are big tech. They’ll either build or buy the best competitor. And they all already have data centers ready to go.
Similar to GitHub, which I imagine they want to emulate. Sure, there are other public code repositories, but most people find GitHub good enough not to bother looking at options.
When I use HF to get some base model I always make sure I have local copy and then load from the local copy. I push said copy into (remote, obviously) git for the eventuality of HF going down/model being removed.
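That "local copy first" workflow can be sketched as below. This is a minimal illustration, not HF's API: `local_model_path` and the `download_fn` injection point are hypothetical names, where in practice `download_fn` would be something with the rough shape of `huggingface_hub.snapshot_download(repo_id=..., local_dir=...)`, so the hub is only contacted on a cache miss:

```python
import os

def local_model_path(repo_id, local_root, download_fn=None):
    """Return a local directory for repo_id, fetching only on a cache miss.

    download_fn is a hypothetical stand-in for a hub downloader (e.g. the
    shape of huggingface_hub.snapshot_download); None means offline-only.
    """
    target = os.path.join(local_root, repo_id.replace("/", "--"))
    if os.path.isdir(target) and os.listdir(target):
        return target  # pinned local copy exists; never touch the hub
    if download_fn is None:
        raise FileNotFoundError(f"{repo_id} not cached under {local_root}")
    os.makedirs(target, exist_ok=True)
    download_fn(repo_id=repo_id, local_dir=target)
    return target
```

From there, `from_pretrained`-style loading against the returned directory never depends on HF being up, and the directory can be pushed to your own remote git/storage as described, insuring against the repo being removed.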
Basically all the HF code is Apache and open source.
Model hosting is a nice convenience, but if HF removed every single repo from GitHub tomorrow and paywalled the model repo, it wouldn't be a big deal. Maintainers would clone their models and repos to somewhere else.
It'll go the same route as Docker. Anyone relying on it for critical software will pay, everyone else will use alternative but compatible sources once it's no longer free.
I think with Docker's recent pivots, it's clear that they thought they were going to be the container company, but Swarm never caught on, and Docker Hub never really evolved beyond hosting images (though its verified images feature is nice).
What Docker ended up excelling at is reproducible development environments that tear down and spin up easily. I think this is the largest faction of Docker users.
That never translated into a high volume of Docker Swarm sales and related sell-through features materializing, though.
That's why a swath of their new paid features focus on enterprise things like SSO, auditing, etc.
For Hugging Face, they will need to have a compelling set of features that make it either hard to migrate off of or vastly preferable to other options the majority of the time.
Right now, their most compelling feature is being (mostly) free.
I'm curious what HF does, because IIUC they're mostly just hosting model files? I know that they have some compute offerings, and they also created/maintain a few libraries, but those aren't particularly widely used. I'm really not sure how they're supposed to earn money...
I've been asking myself that for a while. Would they be so popular if they didn't host & serve petabytes of model files for free? How will they monetize that aspect to match this valuation?
Whilst the scale of their model hosting is impressive, the functionality seems pretty basic. The models are just BLOBs in git LFS repos, you're usually relying on knowing which users to follow, then learning the way they name their models and how that naming convention applies to the particular framework and hardware you're using.
As an example the user "TheBloke" is prolific at publishing LLM models for various hardware / framework combos, but look how little the HF interface actually helps navigate or find what you're after: https://huggingface.co/TheBloke
Also, is anybody using HF Hub in "production"? We've deployed a few LLMs now, and once we've decided on a model, the first thing we do is get it off HF and into our own storage, ready for deployment. There seems no reason to tightly integrate HF Hub with production systems given it's just a bunch of files you can copy and keep.
> I've been asking myself that for a while. Would they be so popular if they didn't host & serve petabytes of model files for free? How will they monetize that aspect to match this valuation?
From what I gather it's mainly hosting, plus maintaining a few libraries, but these last couple of months I've started seeing them offer lots of classes via the DeepLearning platform; as an AI student I've been asked to enroll in classes (temporarily for free) to assess where they are, here is the most recent example [0].
To what end, I'm not entirely sure, but I guess it's to get an overall pulse on what can be monetized and take it from there? I really think this is a low number compared to where we were in the last few years, but it also shows how little investment actually exists in the AI and ML space: consider that GitHub got bought by Microsoft for $7.5B, which then tried to cash in by releasing Copilot, got into legal issues as a result, and then tried its hand with OpenAI and $10B.
This is starting to seem like a reversion to the mean, and AI's promise was always to lower the cost to everything it can, but I think one of the harder pills to swallow is that the traditional VC model is not really being supported after all the hype and losses.
Personally speaking, I'm thinking of moving on to Cyber Security after my finals this semester; I come from Bitcoin and the hype cycles there are something I was looking to get away from after nearly 13 years in that side of fintech.
Some of the models in TheBloke's repo are optimised for things like ggml, which runs well on Mac chips. Not sure if that's what you mean?
I have been experimenting with deploying to Macs for inference, as they are cheaper and use less power. But we also still deploy to a substantial number of GPUs.
Either way I try to take connections to huggingface hub out of the equation at deployment time.
Thanks for responding. I was curious if there are any alternative to OpenAI in terms of pricing. Deployment of a capable model starts at around $80/mo and I am not sure how many requests it can handle then. Either way it is a lot higher than GPT-3.5.
Those are pretty lackluster in my experience. Do they even support GPUs? And in any case, it seems like they go down for anything sufficiently popular.
Can someone who knows the VC industry explain why having 10 years of runway would be considered good? Aren't these companies supposed to be deploying capital as quickly as possible, not banking it? If a CEO said they needed your money 5+ years from now, wouldn't you wait until then to commit?
> If a ceo said they needed your money 5+ years from now, wouldn't you wait until then to commit?
Why would a CEO give up equity if they don't need the capital? Wouldn't they prefer not to sell big chunks of flesh?
If he's not deploying it now, maybe the CEO is predicting rough times ahead and wants to de-risk for future cash flow or fundraising turbulence? Isn't that, too, a kind of negative signal? Or is this all prudent?
No, you typically get all the money wired into your bank account immediately after the deal closes.
It's possible to structure a deal where the company has to reach a series of milestones to unlock each additional tranche of funding, but this is less common.
Notice the choice of words: "Most of our OSS and free usage cost". Who knows what's omitted from that. Also, assuming they have zero money left in the bank from the previous round (they definitely didn't), they're losing/burning $2mm a month.
It seems the enshittification cycle may have resulted at least in part from the Blitzscaling methodology [1]. Are there any studies on the effectiveness of Blitzscaling in general? I wonder if it's a net success or not.
What you do not know can hurt you... behind closed doors they have added the equivalent of airport security on users, repos, and access. Expect a profit center of maintaining your valid ID so you can start work each day.
Enshittification is such nonsense. The original article gives Facebook, TikTok, Amazon and other super successful, super profitable companies as examples.
"Then collapse" - nope, then generate tens of billions of profit per quarter. Im sure the people of Hugging Face would love that
"enshittification" doesn't refer to a change in _investor_ value, but to _user_ value. Facebook does make a ton of money, but at the expense of the user experience. They got huge by providing a great UX and making no money; it was the need to make money that triggered enshittification.
> "enshittification" doesn't refer to a change in _investor_ value
"enshittification" doesn't refer to anything at all. It's a political term meant to be used by anti-corporate causes - it doesn't have a real definition because it's a made-up word with no value other than activism.
Is anti-corporate a bad thing? Is activism also bad? Aren’t all words made up? The word does indeed refer to something. Just because you disagree with it doesn’t mean it doesn’t exist.
Activism is fine. Not on HN. Words are made up all the time, but "enshittification" is currently nothing more than a trendy piece of jargon that basically only exists on HN (and maybe Reddit), and is definitely not appropriate here.
> The word does indeed refer to something. Just because you disagree with it doesn’t mean it doesn’t exist.
That's irrelevant here. Virtually everyone would agree that the word "libtard" also refers to something, but very few would actually appreciate it being used.
I love when Nvidia invests in AI companies. They know that money is coming right back to them. It's basically just a loan to the company in exchange for potential upside of them doing something with Nvidia's chips. :)
"Can we have the investment partly in stock?" "I guess, but our shareholders probably won't be too happy." "No no no, not shares. I mean stock like from a warehouse, we need them cores man!"
Interesting situation with public markets funding Nvidia which is investing in its own customers, driving further strong financial results driving further public investment. A high risk high reward play. Nvidia is eating AI.
Nvidia is likely to have a $36-40 B revenue year this year and next because of massive customer investments. Meta is spending $8 B on their gear and about the same next year. OpenAI+Microsoft are likely following suit with several billion in server buildout. I'm curious if Google or Apple will be adding large swaths of AI boxes to their fleet too.
Ridiculous. This implies infinite supply. Neither the supply chain, nor nature is designed to churn out GPUs like this. That $ will take time to be captured.
Throwaway since I used to be affiliated with them.
HF did an amazing job in community building, the transformers library, and being the central store for all OSS models. That said, they are ages away from PMF and just have a bunch of different products, none of them commercially successful (services, AutoTrain, quantization, HF Hub for EE, inference endpoints, etc.). The majority of their revenue comes from partnerships with SageMaker/Azure, where they get paid for sending users their way, which wouldn't continue to grow.
While it's always a possibility for a FANG company to buy them IMO they are completely screwed. At a $4.5B valuation they will have to reach at minimum $250m in ARR to IPO and at the moment they're probably stuck at around $25m ARR.
> Its revenue run rate has spiked this year and now sits at around $30 million to $50 million, three sources said — with one noting that it had more than tripled compared to the start of the year.
I can't for the life of me understand the strategy that Clem and team have, other than raise as much money as possible just because they can. My experience with their sales teams was just absolutely awful, and it gave me no hope that they can grow ARR as soon as they need to. We practically begged them to sell us something, but it wasn't until we went elsewhere definitively that they seemed interested.
I wonder if they, intentionally or inadvertently, have optimized for investment scale at the expense of everything else? There is a market demand for "pure" AI startups that can take 9-figure investments. It's (comparatively) "easy" to optimize to be that company, but extremely hard to find a way to live up to the hype and provide a return on the investments, and this strategy often ruins the company.
Branding and trust. ML folks love HF, and rightly so. They have great software libraries, host everyone's models, generally great UX all around. They do a great job of making it easy to do cutting edge stuff, while still making it possible to do bleeding edge stuff. They appeal to basically all ML folks in a way few other companies do.
Google could grow GCP by more deeply integrating HF into GCP, I think, while retaining as much of the HF brand and interface as possible.
Big friction point might be ethics, since HF is still seen as the "good guys" and where some folks who left Google over ethical concerns landed.
MS makes sense since they could integrate it with GitHub and Azure into an ML platform that they could sell to their enterprise customer base. It would also be a great addition to the Teams, Bing and Word suite.
No, that's what I was getting at. For roughly half the models I come across, the weights are published on HF. For the other half, the weights are available via GitHub (either the releases section, or a link to something like Google Drive).
Yeah that was my concern as well. I used to work for an AI company that ended up in the same predicament, they raised way too much money at a high valuation and in the process cut down the potential acquirer pool to 3-4 companies. They are now a zombie with worthless common shares, no shot at an acquisition and nowhere near enough revenue to go public.
I wouldn't be surprised if they would eventually get acquihired by nVidia. Simply providing all that code and infrastructure to the community is a big boon to raise the value and sales of nVidia hardware; the classic strategy principle - commoditize your complements.
Genuine question … What business are you in when selling AI/ML?
I’m far from being knowledgeable in this space, but it seems like AI/ML “is a feature, not a product”.
And if that’s the case, what business are you in when a company sells AI/ML?
Are you in the business of licensing the model you created? Charging for the output? Hosting infrastructure? What exactly are you in the business to sell?
To use an analogy, if you’re selling AI/ML, are you in the IaaS industry, PaaS industry, SaaS industry (or something else?)
They are selling picks and shovels. There will be a few huge winners in the AI space, a good amount of modest winners, and a lot of losers. They don't care who wins or loses but are happy to sell the supplies needed for anyone that wants to take a shot.
If they can make it easier and worthwhile to use their product and create business value, then they will sell a lot of picks and shovels.
I agree and so far I think it's picks and shovels all the way down. Most of the "AI" companies I see sell tooling, often to "help your AI team work faster|better|spend less time doing x|whatever. It all presupposes there are end uses, which are few and far between, certainly almost none are being sold. AI is still in the hype mode of companies having internal budget to "invest in AI" and so that's who the AI companies are targeting, as opposed to actually making products that do something. See also blockchain wallet companies.
PS - I'm optimistic about AI, but it's important to be realistic about the current state of the industry
It's important to note that despite the recent excitement about LLMs, which is still an emerging technology, "AI" is not a new market by any means, nor are major companies only now investing in it. For the better part of a decade, ML has been widely adopted across industries, and the average person uses an "AI" system many times in a given day.
For example, if you open the home screen on the average smartphone right now, you'll see apps like:
- Delivery apps like Uber, Lyft, etc., whose recommendations, ETA predictions, driver matching, and more are built on ML.
- Media apps like YouTube, Netflix, etc., all of whom rely on models for recommendations.
- Email apps like Gmail, whose filtering (both spam and categorization) and text completion are based on ML.
- Photo apps like Instagram, Snapchat, and even your phone's basic Camera app, all of which use computer vision.
If you Google anything, you're perusing the output of a model. If you're being recommended something on basically any platform, you're interacting with ML. If you ever use speech-to-text, you're using a neural network. Your bank uses ML for fraud detection, your posts on social media are moderated by ML-based content moderation, and if you have a car with any recent-ish sort of lane departure assistance, you're driving with help from a neural network.
Most of these companies have large, mature ML teams, whose outputs represent massive amounts of revenue. Hence, they represent a legitimate market for selling picks and shovels.
That's the framing we've been using for years: look at some big-tech ML or data science that we've reframed as AI; your manufacturing/transportation/retail/whatever company needs it too, but there are no products for end uses in your industry, just lots of tooling and consulting services you can buy for bespoke projects.
Indeed, lots of tooling out there. Which makes sense as it's very early days. I imagine that LLM AI will be integrated into so many things down the road though that it will be an expected feature of most any product.
- A model isn’t a tool, that’s a purpose built offering that can be used for 1 use case. Tools are by definition, intended to help with broad/general use cases.
- Hosting isn’t a tool, it’s a commodity and lots of cloud players already occupy that space.
Please don’t take my comments as trolling; I just genuinely don’t understand what exactly is being sold that is new or unique and doesn’t already exist.
They have a suite of software you can use to power your own software. These are tools and no different than what database vendors, etc in terms of being software you use to build software. In addition there’s support and services.
I imagine they have big plans on how to expand and grow the business.
Yep. Meta is spending $22 B on CapEx over 2 years without any plan of how to use it, with about half of it tied to buying shit tons of H100 to go in custom servers.
> a) a hardware vendor nvidia whose products are needed by anyone in the game
FTFY.... I don't think AMD will be competitive in AI until at least 2025, though given their broken roadmap promises, not sure if anyone will be willing to invest in them
I mean, you still need CPUs to run your workloads, and AMD makes decent ones. I work for a company that sells data centre network switches. There's nothing AI-specific in our boxes, but we will sell a lot more of them if people are building more data centres to cater to AI-driven demand.
Great point! But while demand for network switches explodes, do you really get a competitive advantage from one company's switches vs another? (serious question as I've never had to build a DGX superpod or anything like it from scratch).
> do you really get a competitive advantage from one company's switches vs another?
I am obviously biased but IMHO, yes. High end switches are not commodity gear. And a lot of the differentiating secret sauce is in software. So, even if two switches from two different vendors have the same silicon, they might have dramatically different characteristics. That is without going into other aspects like integration with the rest of the stack, quality of support and even lead times.
A) so you’re competing against aws, azure, etc. That sounds brutally tough.
B) re: value adds, like what exactly?
C) This seems like such core functionality that a company wouldn't outsource it to a 3rd-party vendor. If that's the case, there isn't an opportunity to sell anything if you're that AI/ML vendor.
A) aws/azure need nvidia/amd chips in their servers... it is too difficult to homegrow these chips (altho aws is trying)
B) Microsoft (GitHub Copilot, $20 per month per user), document/image autogeneration from a prompt (likely in Word, $5 a month per user for this)
C) Right, yeah, that's why FB/Goog have big AI farms
I believe that some of the most valuable short-term use cases for AI/ML seem to be "feature, not a product" - this means that the incumbents making various products can unlock lots of competitive value by adding those features in a way that is difficult for newcomers, because they not only need to develop the AI/ML feature but also have to build a competitive solution for the core system which generates or contains the data on which the feature relies.
Like, it's plausible that many organizations would be willing to pay lots of money for a "ChatGPT which knows my internal documents" AI/ML feature for the Sharepoint/Confluence/etc. they are currently using, but would be very wary of migrating those internal documents to some upcoming startup's new document management system.
To be fair, I don't think there are clear dichotomies in this space. It really has the potential to be a feature or a product in either of the IaaS/PaaS/SaaS industry.
Just do a quick HN search on companies selling AI/ML, you will see what I mean.
The further you keep the user away from training and hosting, the less you’re in those businesses. But I would think it’s only economical to do that if you have some type of advantage in implementing the things you’re abstracting away.
I think the opposite may be true. If you control the training and hosting, surely that means you have to be more involved in many industries to make the product useful? Whereas with offloading training/tuning to the user, you kinda just become PaaS, in that you provide a platform which enables others.
What do people think separated GitHub from Docker?
Docker also had DockerHub, but it wasn’t as necessary as GitHub and didn’t catch on as well.
I think HF is more like docker than GitHub. GitHub had the wide scope to be huge. Docker was too niche (in comparison) and didn’t catch on so much to be necessary for every single project like Git did.
Similarly, HF is also niche in the sense that docker is. There’s no need to host your model on HF, it’s just a convenience for some projects. It’s effectively a package manager for ML models.
It seems that if there’s a way to make models more easily runnable, HF would not be necessary at all. They have a community there, but it’s not something that can be monetized well.
Dockerhub is great for sharing images with the world at large, much like HF is for models. But once you're at the point where you want to host your images / models for business use do you really still want them?
All cloud providers seem to have an image hosting feature, why go through hassle of managing and paying for a docker hub account when you're already on AWS and can integrate better with ECR? The same problem exists for ML model hosting IMO.
Github is totally different, it's a more complex and human proposition. For many github has replaced an entire raft of tools, not just code hosting.
Docker grew too fast. They chased VC dollars for what really was just a cool open source project. I spent some time at their first office in SF ten years ago for meetups. They had a really cool, unique culture. I think that just doesn't scale sometimes.
They also have an insane amount of traffic - very envious, they'll be able to capitalise on other things as soon as they start to build commercial products people are willing to pay big money for.
>What is the moat? "We will run your inference" can't be the answer.
What was the moat when AWS EC2 launched in 2007? There were already thousands of little mom-and-pop VPS companies that would rent you a virtual Linux box with root permissions for $15/month.
ML models are the new apps. There is a huge opening for an App Store type situation which enables people to buy models and integrate them into their products, handling the appropriate licensing.
Bonus points for certifying the models actually do what they say. That by itself will probably become a mini industry.
Hugging Face are the clear leaders in terms of having mindshare in the community to be able to build it.
Models != apps, they’re closer to the backend for apps or a core library, so your customers aren’t “everyone with a phone/computer” like an app, it’s the much smaller, but potentially high impact, “everyone who creates apps”.
We’re not in a world where non tech people are searching for models in the model store from their phone.
From Nvidia's point of view, this might be an investment in their own business via network effects rather than a straight-up investment in Hugging Face. I hope so, because when the VC money goes, the billing will go through the roof.
The whole GTM is confusing. They're experts at ML and community, and awful at business. I know HN looks down on salespeople, but HF needs some good ones, ASAP.
To add further, I do not know what the end goal of Hugging Face is.
1. They have an inference API, but every cloud provider can implement that within the next year.
2. They offer subscriptions, but the market size for subscriptions is questionable.
3. I hope this new round of funding doesn't bring problems for them, because making money with open source is hard, and at this scale of funding it might be even harder.
We'll see what happens over the next few years.
GitHub for ML models? If they can anchor themselves to the open-source developer network, their acquisition value is in a tech giant being able to cross-sell to that community.
Take money off the table, that’s what all the startup gurus call it, but whose money is kinda the question. Have pension funds stopped chasing VC funds etc?
The companies that invest have products (or have other investments with products) that have a strong market - they end up increasing the price of products to cover the purchase of acquisitions and losses. This results in inflation and worsening wealth distribution.
The article is quite clear on whose money it is - "Google, Amazon, Nvidia, Intel, AMD, Qualcomm, IBM, Salesforce and Sound Ventures".
For at least half of them, it might make sense to simply sponsor accelerating worldwide new AI product development because those new products will increase sales of their hardware and services; pitch in $20m together with others to support efforts like Huggingface, while expecting that their tools will indirectly cause an extra $1b sales for you in the next few years.
Yes, they have. Higher bond yields (compared to 0% three years ago) is causing pension funds to shift their allocation away from "alternative" investments (vc & pe), so it has become harder to raise a new VC fund these days.
Probably the same as most privately funded tech startups 1) Gain user base 2) Take investment funding 3) Sell and make the people at the top rich 4) Repeat.
Buying a lottery ticket is buying a dream. You too can raise like this if you sell a dream to people with deep pockets who will only be mildly perturbed when it goes bust.
Oh. When I last read about it I thought it was meant to be sustainable.
I hope the founders take their share home and have fun burning the rest. At least hopefully some open software/AI models will have come out of it when the company collapses under its own weight.
If you’re hosting for your own use, unclear why you wouldn’t just do that directly on cloud instances. If you’re trying to share models with others publicly, then HF is the best option right now.
What does X.ai seek: democratization or centralization?
If it wants everyone to fork its models, having the model on Huggingface and using its libraries will increase adoption, as people are used to that format, having the quantization built-in, etc. Having to perform model conversions slows things down, despite TheBloke’s constant efforts to convert every format to every other.
If the model will only be accessible through a CLI, APIs, or a website UI, then a custom format can make sense.
I am curious however, why name drop Elon and X.ai when asking this question?
With all due respect, I'm not sure how that is going to advance the technical conversation, and in some regards may indeed de-rail it and hinder the mentoring you might receive?
I hope I do not sound combative!
I am generally curious, as I see this behavior at $JOB, but in reverse. "My customer is seeing this." "I have a customer that is asking Y."
I am genuinely curious about the different perspectives that teammates can come from, which can drive them to take wildly non-congruent approaches, even with similar career experiences.
>I am generally curious, as I see this behavior at $JOB, but in reverse. "My customer is seeing this." "I have a customer that is asking Y."
You work at a business and you think it's name dropping for a colleague to bring up specific customers' needs?
I'm trying to see the steel man version of your argument, but I'm not getting it. GP is working for a well publicized AI startup and shared that they are evaluating HF versus alternatives. I thought it was useful information, and it's germane to this thread.
I don't know what this could ever do that isn't easily reproducible with a tiny fraction of that investment. Fragmentation of communities is an absolute given in software.
Add to that the fact that if they get too dominant they'll get leveraged out of the supply chain, and I'm not sure what the value proposition is here.
I mean obviously I'll be wrong but it's hard to not be skeptical
The real question is why the company doesn't use .ml as a domain name alias.
Edit: dang, guess HN strips out emojis from posts. The Punycode version of the Hugging Face emoji on a relevant TLD supporting emoji domains would be http://xn--zp9h.ml/
They have a VC arm that just looks for and does investments, which doesn’t necessarily mean Salesforce proper is interested in the tech (though I’m sure there is some of that), kind of like Google Ventures.
Really, throughout the entire funding process no one raised the similarity of "Hugging Face" to "Face-huggers", the larval name for the Xenomorphs from Alien?
> Although, is it technically a platform / marketplace?
Anyway for now, enjoy the free beer from the VCs.