I did this last year with my kids' various activities using ChatGPT and the results were just OK. I'm looking forward to trying this out with Claude this year!
I just migrated our app from Vercel to AWS Amplify.
We’re updating everything to use SSO so we can do BeyondCorp-style auth on SaaS platforms. Vercel wanted to charge >$10k / year for their enterprise plan to get SAML access, versus our prior bill of ~$2.5k / year. The migration took ~30 mins and we expect our bill to drop to $500 / year. The toughest part was making sure we didn’t miss any build variables / secrets.
Is this as cheap as running bare metal? No. But 30 mins to save $10k / year was worth it. And maybe the bigger point: what value are they really adding if it’s this easy to migrate off?
I think the list I wanted / expected was, “10 things we’re really proud were invented at Waterloo (that you may or may not realize were invented here)”
Had exactly the same experience. Definitely annoying, but more than anything else, I’m impressed that Rachel was able to turn it into a cogent blog post.
It's not exactly what you're asking for, but we have a large bucket with billions of files (don't ever do this, it was a terrible idea) and we manage deletions via lifecycle rules. If your file naming convention and data retention policy permits it, far easier than calling delete with 1,000 keys at a time.
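For anyone curious, here's a minimal sketch of that kind of rule with boto3 (bucket name and prefix are made up; the real rule depends entirely on your naming convention and retention policy):

    import boto3

    s3 = boto3.client("s3")

    # Expire everything under a hypothetical "logs/" prefix after 90 days.
    # S3 then deletes the objects itself, with no DELETE calls on our side.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-huge-bucket",  # hypothetical
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-old-logs",
                    "Filter": {"Prefix": "logs/"},
                    "Status": "Enabled",
                    "Expiration": {"Days": 90},
                }
            ]
        },
    )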
Also just a word of warning, if you do have a lot of files, and you're thinking "let's transition them to glacier", don't do it. The transfer cost from S3->Glacier is absolutely insane ($0.05 per 1,000 objects). I managed to generate $11k worth of charges doing a "small" test of 218M files and a lifecycle policy. Only use glacier for large individual files.
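The back-of-the-envelope math, for anyone wondering how a "small" test gets to $11k:

    # Lifecycle transitions to Glacier are billed per object, not per byte.
    objects = 218_000_000                 # the "small" test
    price_per_1000_transitions = 0.05     # USD
    print(objects / 1_000 * price_per_1000_transitions)  # 10900.0, i.e. ~$11k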
I have to ask: what’s performance like for operations on the bucket objects?
Edit: I ask because AWS suggests a key naming convention for buckets with large numbers of objects, to ensure that you're distributing your objects across storage nodes and prevent bottlenecks.
“This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications.”
No difference for Put, Get and Delete. Don't know about List, but if it degrades it's not significant. I worked with buckets with exabytes of data and billions of objects.
Never noticed any speed difference due to bucket size. S3 is generally slow anyway (250ms for a write isn't uncommon) but it scales very well and we use it for raw data storage that's not in our critical path, so the latency isn't a problem.
Edit Response: I've always used the partitioning conventions they suggest, so I'm not sure what sort of impact you'd encounter without them.
For us, it was due to the relatively high PUT cost if you're storing a large number of small files. We ended up changing our approach and now store blocks (~10MB archives) in S3 instead of individual files. The S3 portion of our AWS bill was previously 50% PUT / 50% long-term storage charges. After the change, we reduced the PUT portion to nearly $0 and cut our overall AWS bill by almost 30%, while still storing the same amount of data per month.
e.g. if you write 1 million 10KB files per day to S3, you're looking at $150/mo in PUT costs. If you instead write 1,000 10MB blocks, you're looking at $0.15/mo in PUT costs.
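Those numbers work out if you assume the standard S3 PUT price of roughly $0.005 per 1,000 requests (an assumption on my part; check current pricing for your region):

    # Assumption: S3 Standard PUT requests at ~$0.005 per 1,000.
    put_price_per_request = 0.005 / 1000
    days = 30

    small_file_puts = 1_000_000 * days   # 1M x 10KB files per day
    block_puts      = 1_000 * days       # 1,000 x 10MB blocks per day (same ~10GB/day)

    print(f"small files: ${small_file_puts * put_price_per_request:.2f}/mo")  # ~$150.00
    print(f"blocks:      ${block_puts * put_price_per_request:.2f}/mo")       # ~$0.15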
Due to S3's support of HTTP range requests, we can still request individual files without an intermediate layer (though our write layer did slightly increase in complexity) and our GET (and storage) costs are identical.
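Roughly, the read path looks like this (a sketch with boto3, assuming a hypothetical index, maintained at write time, that maps each logical file to its block key, offset, and length):

    import boto3

    s3 = boto3.client("s3")

    def read_file(block_key: str, offset: int, length: int) -> bytes:
        """Fetch one logical file out of a ~10MB block via an HTTP range request.

        block_key/offset/length come from an index kept at write time (not shown).
        """
        resp = s3.get_object(
            Bucket="my-archive-bucket",                      # hypothetical
            Key=block_key,
            Range=f"bytes={offset}-{offset + length - 1}",   # inclusive byte range
        )
        return resp["Body"].read()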
This has a limit of 1,000 keys, does not handle redriving failed requests, and offers no report of the job.
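For reference, that approach is roughly this with boto3's delete_objects, which caps each call at 1,000 keys and leaves retries and reporting entirely to the caller (a sketch, not battle-tested):

    import boto3

    s3 = boto3.client("s3")

    def delete_keys(bucket: str, keys: list[str]) -> list[dict]:
        """Delete keys in chunks of 1,000 (the DeleteObjects limit).

        Returns per-key errors; redriving failures is entirely on the caller.
        """
        errors = []
        for i in range(0, len(keys), 1000):
            chunk = keys[i:i + 1000]
            resp = s3.delete_objects(
                Bucket=bucket,
                Delete={"Objects": [{"Key": k} for k in chunk], "Quiet": True},
            )
            errors.extend(resp.get("Errors", []))
        return errors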
What you could do is use S3's inventory report feature, feed the generated manifest to Batch Operations, and handle the delete logic in a Lambda. A lifecycle policy with some tagging could also fit your needs here.
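A minimal sketch of the Lambda side, assuming the schema 1.0 request/response shape S3 Batch Operations uses for Lambda invocations (the actual delete logic is a placeholder, and keys from CSV manifests may need URL-decoding):

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        """Invoked by S3 Batch Operations for each batch of manifest entries."""
        results = []
        for task in event["tasks"]:
            bucket = task["s3BucketArn"].split(":::")[-1]
            key = task["s3Key"]
            try:
                # Placeholder delete logic; could also check tags, age, etc.
                s3.delete_object(Bucket=bucket, Key=key)
                results.append({"taskId": task["taskId"],
                                "resultCode": "Succeeded",
                                "resultString": "deleted"})
            except Exception as exc:
                results.append({"taskId": task["taskId"],
                                "resultCode": "TemporaryFailure",  # retried, then reported
                                "resultString": str(exc)})
        return {
            "invocationSchemaVersion": event["invocationSchemaVersion"],
            "invocationId": event["invocationId"],
            "treatMissingKeysAs": "PermanentFailure",
            "results": results,
        }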
I used this extensively when I was a student (MIT '07). I sort of assumed all schools did something like this. I still reference the action verbs (page 31) when updating my resume.
Having used Google Maps in a professional context for quite some time, I certainly welcome the competition. Their data is the gold standard, but their Enterprise licensing team is difficult to deal with. One example: We were sold a license that we were subsequently told was inadequate. We were then forced to use a 3rd party broker to negotiate a new deal, and even then the terms were unclear. Also strange because you get 75% discounts at each pricing tier, so we ended up in a tier that gave us "lots of room to grow".
It does, but most of the implementations I have tried (including multiple PAID ones in the App Store) don't work well at all. I would settle for locking when I walk away, then tap to unlock when I get back.