
I work on quality control for medical data (MRI images) and have huge data sets from machines going back over a decade. Most of the useful stuff is extracted metrics (stored in a db), but every now and then we need to pull up a data set and run updated analysis algorithms against it. We usually keep the latest couple of years in S3 and the rest in Glacier.

The data trove is fairly unique, and valuable as the only one of its kind, but we don't need anywhere near instant access to most of it.
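
A minimal sketch of that tiering with boto3, in case it helps anyone setting up something similar. The bucket name, prefix, and key here are hypothetical, and the 730-day cutoff is just my reading of "latest couple of years":

    import boto3

    s3 = boto3.client("s3")

    # Lifecycle rule: transition objects to Glacier after ~2 years,
    # so recent data stays in S3 and older sets age out automatically.
    s3.put_bucket_lifecycle_configuration(
        Bucket="mri-qc-datasets",          # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-old-scans",
                "Status": "Enabled",
                "Filter": {"Prefix": "scans/"},   # hypothetical prefix
                "Transitions": [{"Days": 730, "StorageClass": "GLACIER"}],
            }]
        },
    )

    # Pulling an archived data set back for reanalysis: a Bulk-tier
    # restore is the cheapest option and completes in a matter of hours,
    # which is fine when instant access isn't needed.
    s3.restore_object(
        Bucket="mri-qc-datasets",
        Key="scans/2014/site-03/series-0001.tar",  # hypothetical key
        RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
    )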
