For us, it was the relatively high PUT cost of storing a large number of small files. We ended up changing our approach: we now store blocks (~10MB archives) in S3 instead of individual files. The S3 portion of our AWS bill was previously split 50% PUT / 50% long-term storage charges. After the change, the PUT component dropped to nearly $0 and our overall AWS bill fell by almost 30%, while we still store the same amount of data per month.
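A minimal sketch of what the write side can look like, assuming boto3; the pack_and_upload helper, bucket layout, block naming, and index format are all made up for illustration, not our actual code:

    import io
    import uuid
    import boto3

    s3 = boto3.client("s3")
    BLOCK_TARGET = 10 * 1024 * 1024  # aim for ~10MB per block

    def pack_and_upload(files, bucket):
        """files: iterable of (name, bytes) pairs.
        Returns {name: (block_key, offset, length)} so files can be found later."""
        index, buf = {}, io.BytesIO()
        block_key = f"blocks/{uuid.uuid4()}.bin"
        for name, data in files:
            # Record where this file lands inside the block before appending it.
            index[name] = (block_key, buf.tell(), len(data))
            buf.write(data)
            if buf.tell() >= BLOCK_TARGET:
                break  # real code would roll over to a new block and continue
        # One PUT covers every file packed into this block.
        s3.put_object(Bucket=bucket, Key=block_key, Body=buf.getvalue())
        return index

The index has to be persisted somewhere (a database, or a small manifest object alongside the block) so reads know which block and byte range to hit.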
E.g. if you write 1 million 10KB files per day to S3, that's ~30 million PUTs per month, which at S3's $0.005 per 1,000 PUT requests works out to $150/mo in PUT costs. Write the same data as 1,000 10MB blocks per day and you're looking at $0.15/mo in PUT costs.
Because S3 supports HTTP range requests, we can still fetch individual files without an intermediate layer (though our write path did get slightly more complex), and our GET and storage costs are unchanged.
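The read side is then a straightforward ranged GET. A rough sketch using boto3's Range parameter (a real boto3 argument); the index lookup and the get_file name are hypothetical, matching the write sketch above:

    def get_file(index, name, bucket):
        # Look up which block holds the file and where it sits inside that block.
        block_key, offset, length = index[name]
        # HTTP range request: fetch only this file's bytes out of the ~10MB block.
        resp = s3.get_object(
            Bucket=bucket,
            Key=block_key,
            Range=f"bytes={offset}-{offset + length - 1}",  # inclusive byte range
        )
        return resp["Body"].read()

Since S3 charges the same per-request price for a ranged GET as for a full GET, and you're billed for stored bytes either way, this keeps GET and storage costs identical to the one-file-per-object scheme.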
I have billions of files. What do?