
The pricing model helps support our ongoing development costs for the platform. It takes real time to develop the tech, continually test it with our users, and refine the experience so that it's easy to use out of the box. We chose a pay-as-you-go structure so that users who aren't sure about their usage yet can try it out at a very low cost and decide whether it works for them; we didn't want a steep initial price to be a barrier to entry for people exploring ML. Requests are loosely enforced through our central key server (though there is no request rate limit). While absolutely no training/inference data leaves the container, we do occasionally send back usage metrics to keep track of usage. If you're interested in a totally isolated solution, we can talk about deploying on-prem key servers so that your cluster can be completely isolated.
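To make that concrete, here's a rough illustrative sketch (not our actual code; the endpoint, key, and field names below are placeholders) of how the container could meter usage: it counts requests locally and periodically posts only an aggregate count to the key server, never the inference inputs or outputs.

    import time
    import requests  # standard HTTP client library

    # Placeholder values for illustration only
    KEY_SERVER_URL = "https://example.com/v1/usage"
    API_KEY = "your-api-key"

    request_count = 0

    def record_request():
        """Bump the local counter each time the container serves an inference request."""
        global request_count
        request_count += 1

    def report_usage():
        """Send only the aggregate count back to the key server; no training or
        inference data is included in the payload."""
        global request_count
        payload = {"api_key": API_KEY, "requests": request_count, "ts": int(time.time())}
        try:
            requests.post(KEY_SERVER_URL, json=payload, timeout=5)
            request_count = 0
        except requests.RequestException:
            # Reporting is best-effort: if the key server is unreachable,
            # inference keeps working and the count is sent on a later attempt.
            pass

An on-prem deployed key server would just swap out that URL so nothing ever leaves your network.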

I hope that clarifies everything, let me know if you have any feedback or further questions :)




Usage-based pricing seems very misplaced for a self-hosted solution. Questions of enforceability aside (all I need to do is remove the telemetry code to get it for free), the issue is that usage-based pricing is meant to scale your revenue alongside your costs. When the customer is hosting the infrastructure, you have the same costs regardless of how many requests they make, so as a customer it doesn't feel right to pay you for each request that my own servers are serving.

A competitor can easily undercut you here. I would also be interested in hearing about any other company that charges per-request for self-hosted software, because it's certainly not a model I've heard of. Typically the way to approach this is a licensing fee for running the software self-hosted.


I don't think usage-based pricing is esoteric: if you look at products in the enterprise space (Gitlab, Splunk, Mongo, etc.), they're all priced on a usage metric of some sort (in our case, API requests). We're making on-prem ML accessible at SaaS prices (try negotiating an on-prem contract with other MLaaS providers). Our costs are ongoing as we continue to improve the product over time, and those improvements are passed down to you as a user.

If you're interested in running the product on a licensing fee instead of pay-as-you-go, feel free to shoot us an email at hi@modeldepot.io :) We found a licensed pricing model to be more restrictive for new ML users and a higher barrier to entry. If you're not interested in using our paid product, you can check out our primary pre-trained ML platform at https://modeldepot.io/browse and hopefully find an ML solution that works for you without paying a cent.



