Your post makes me think of this article, which reviews the two services pretty comprehensively:

https://cloudacademy.com/blog/google-vision-vs-amazon-rekogn...

The scenario-based costs table at the bottom is particularly interesting. If you need to run the detectors on all incoming images of a site ingesting, say, ~10 million images per week, you're looking at nearly $15,000/week with AWS (close to $800K/year), and nearly $1.2MM/year with Google. Either way, you'll still probably need to employ some kind of backend engineering team to maintain the wrappers for calling the API and manipulating the response data.
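To make that arithmetic concrete, here's a quick back-of-envelope sketch. The per-image rates are assumptions chosen to reproduce the article's totals (a blend across whichever detectors you run), not published price sheets:

    # Rough cost check for the figures above. Rates are assumed
    # blended per-image prices, not official pricing.
    IMAGES_PER_WEEK = 10_000_000
    WEEKS_PER_YEAR = 52

    assumed_rate_per_image = {
        "aws_rekognition": 0.0015,  # ~$1.50 per 1,000 images (assumed)
        "google_vision":   0.0023,  # ~$2.30 per 1,000 images (assumed)
    }

    for provider, rate in assumed_rate_per_image.items():
        weekly = IMAGES_PER_WEEK * rate
        yearly = weekly * WEEKS_PER_YEAR
        print(f"{provider}: ${weekly:,.0f}/week, ${yearly:,.0f}/year")

    # aws_rekognition: $15,000/week, $780,000/year
    # google_vision:   $23,000/week, $1,196,000/year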

We were able to solve this in-house using only about 15 8-core servers. (We used a few general-purpose GPU servers for training, but found we only needed CPU machines at runtime, and we avoided the poor latency of the AWS calls.) That gave us quite a lot of redundancy for traffic spikes and a pretty easy deployment system for adding or removing nodes. It was also only one of several dozen machine learning projects going on, so whatever portion of the total cost of operation you allocate to the machine learning team's salaries is further amortized by work on many projects; the cost of the GPU machines for model training, for example, was spread across a variety of them.
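A rough capacity check suggests why CPU-only inference can be enough at this scale. The per-image inference time here is an assumption for illustration; the comment above doesn't give model timings:

    # Serving 10M images/week on 15 8-core CPU boxes.
    IMAGES_PER_WEEK = 10_000_000
    SECONDS_PER_WEEK = 7 * 24 * 3600
    SERVERS, CORES_PER_SERVER = 15, 8

    mean_rate = IMAGES_PER_WEEK / SECONDS_PER_WEEK   # ~16.5 images/s average
    total_cores = SERVERS * CORES_PER_SERVER         # 120 cores
    per_core_budget = total_cores / mean_rate        # ~7.3 s/image

    print(f"average load: {mean_rate:.1f} images/s")
    print(f"per-core time budget: {per_core_budget:.1f} s/image")

    # Even at a 10x traffic spike, each core only needs to classify
    # an image in ~0.7 s, which a CPU-only model can plausibly meet,
    # leaving the headroom for spikes mentioned above.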
