derekdb's comments

I've seen this happen to a number of friends. Most of them have enough savings that they could 'retire', but none of them wanted to retire yet. It often means moving away from friends to get to a lower cost of living. Once you are over 40, it can be hard to find a job. You are too experienced for mid-level roles, but your core skill set may no longer be relevant. e.g. there are far fewer Win32 developers than 10 years ago, but retraining as a web developer can be a big challenge. I've also seen people get promoted to a management level based on a targeted skill set, where it can be hard to find a new role.


As a customer of serverless, you do not have to manage any servers; that is handled by the platform. Even with a platform like EKS, the customer is still responsible for selecting the instance type, the OS/AMI, and any OS updates. With serverless, the customer doesn't worry about managing servers; they just enqueue tasks (function invocations) and let the provider worry about right-sizing instance types, OS updates, etc.
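A minimal sketch of what that looks like from the customer side (the function name and payload are made up): the handler is all you deploy, and "enqueuing a task" is just an API call:

    import json
    import boto3

    # The code you deploy; the provider decides what hardware it runs on,
    # patches the OS, and scales capacity up and down.
    def handler(event, context):
        return {"status": "processed", "order_id": event["order_id"]}

    # The "enqueue a task" side: one Invoke call, no servers to manage.
    client = boto3.client("lambda")
    client.invoke(
        FunctionName="process-order",          # hypothetical function name
        InvocationType="Event",                # async: fire and forget
        Payload=json.dumps({"order_id": 42}),
    )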


Having worked on both AWS and GCP, my experience was that AWS had a much better organizational grasp on how to price services. They track predicted revenue/costs against observed, and expect each team to have roadmap projects that improve that ratio over time (or at least keep the ratio the same as they drive down prices). When I was there, Google had no such process for tracking costs, and engineering teams had much less understanding of their costs as well. I never worked on Azure, but I heard stories similar to my experience at GCP: that there was no institutional process for reducing costs.

Building top-down processes to reduce costs and enable price drops is one of Amazon's core strengths. It is core to how they run all of their businesses.


aws s3 cp can copy between buckets.
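e.g. (bucket and key names made up):

    aws s3 cp s3://source-bucket/path/key s3://dest-bucket/path/key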


That's true, but the data flows through the machine executing the aws CLI. The parent was asking how to copy data without it flowing through a compute instance.


I'd double-check the CLI code path for that case. The S3 API does have a copy operation, which performs the copy within S3 without compute acting as an intermediary. If the CLI isn't using it for bucket-to-bucket copies, that sounds like a bug that needs to be fixed in the CLI tooling.
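For reference, the direct API call looks like this in boto3 (bucket and key names made up); CopyObject does the copy entirely inside S3:

    import boto3

    s3 = boto3.client("s3")
    # Server-side copy: S3 moves the bytes itself; the object contents
    # never pass through the machine running this code.
    s3.copy_object(
        Bucket="dest-bucket",
        Key="path/key",
        CopySource={"Bucket": "source-bucket", "Key": "path/key"},
    )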


It directly checks for s3-to-s3 [1] and indicates that it wants to copy...

I've read over it and I'm reasonably sure that it's going to issue CopyObject, but it would take me actually getting out paper and pen to really track it down.

The AWS CLI and Boto are a case study in overdoing class hierarchies. Not because there's any one obvious AbstractSingletonProxyFactoryBean; rather, no single decision stands out as "this is where they went wrong", and nevertheless the end result is a confusing mess of inheritance and objects.

[1]: https://github.com/aws/aws-cli/blob/45b0063b2d0b245b17a57fd9...


Not to mention the insane over-engineering of a Python 2.7-compatible async task-stealing IO loop, which is slow as hell and delivers a pitiful maximum of ~150 MB/s at 30% CPU core activity. That's why anyone needing to regularly download/upload files from S3 needs an additional tool (s5cmd, s3pd, etc.).


Thanks! I don’t know why I had the understanding that it worked the other way. This is useful to know!


No, when copying objects between buckets (aws s3 cp s3://... s3://... and the corresponding sync command), the AWS CLI uses CopyObject (https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObje..., previously known as S3 PUT Copy), in which the client doesn't handle any object contents. The call stack eventually reaches https://github.com/boto/s3transfer/blob/develop/s3transfer/c... (or its multipart equivalent), where it calls the botocore binding for this API.
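For anyone scripting this directly, boto3 also exposes the managed version of that transfer (bucket and key names made up): as I understand it, it issues CopyObject below the multipart threshold and UploadPartCopy above it, still fully server-side:

    import boto3

    s3 = boto3.client("s3")
    # Managed server-side copy via s3transfer; switches to the multipart
    # path automatically for objects above the size threshold.
    s3.copy(
        CopySource={"Bucket": "source-bucket", "Key": "big-object"},
        Bucket="dest-bucket",
        Key="big-object",
    )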


S3 explicitly changed its license to allow copying the S3 API. I forget the year; ~2010? I was working on S3 at the time, and it was a strongly debated decision.

Google’s initial launch of their cloud storage copied not just the S3 API, but also the error codes.


I think the point is that if this case had gone the other way, this kind of open licensing would happen a lot less. Companies would be obliged to enforce API copyright to protect themselves from loss of copyright (shareholder value).


We should be


Some of the cultural issues around avoiding virtual came from hard lessons with v1.0. After shipping v1, they realized that there was a large set of security and compatibility issues from not having framework classes sealed. No one I worked with really liked the idea of sealing all our classes, but the alternative was an insane amount of work. It is just too hard to hide implementation details from subclasses, and if you don't hide the details, you can never change the implementation.

I can't speak for Swift, but there were also real security challenges. It is hard enough to build a secure library; having to also guard against malicious subclasses is enough to make even the most customer-friendly dev run screaming. My team hit this, and it cost us a huge amount of extra work that meant fewer features. vNext we shipped sealed classes and more features, and the customers were happier.
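(A rough analogue outside .NET, since the same pressure exists in most languages: Python's typing module has a final decorator that type checkers such as mypy enforce by rejecting any subclass; the class below is made up.)

    from typing import final

    @final
    class TokenValidator:
        # "Sealed": type checkers reject any subclass, so nothing can
        # override validate() to bypass the check. (Enforced by mypy/
        # pyright at check time, not at runtime.)
        def validate(self, token: str) -> bool:
            return token.startswith("v1:")

    # class EvilValidator(TokenValidator):  # error: cannot inherit from final class
    #     def validate(self, token: str) -> bool:
    #         return True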


Last I checked, Eucalyptus is missing a number of the APIs that customers really use once they are doing more than just hosting a few VMs: access control, security, VPN... The service providers are actually quite different, and any attempt to standardize is just going to be the least common denominator, which will be missing a great number of useful features.


Amazon and especially Amazon Web Services are hiring.

Looking to work on a very large distributed system, with the opportunity to impact a huge customer base? Come join S3! https://us-amazon.icims.com/jobs/103943/job; for more of our open positions, see http://aws.amazon.com/s3-jobs/

