Either Lightsail, or just the default VPC that all AWS accounts come with.
If you think your use case needs private/public subnets, NAT, complex routing tables, etc. you can still add that to the default VPC later.
A few years ago, when what we now call EC2-Classic was a thing, there were no sophisticated networking options at all. I assume VPC and its services were introduced because customers asked for something that feels like traditional datacenter networking.
This stuff is obviously so hard to use that even the experts don't know wtf they are doing.
At a prev shop, they undersized the VPC and had sporadic failures of burst compute, like spot instances that got spun up by other AWS services (Batch) to work through job queues. Neither the cloud architect nor the CloudOps lead could resolve this for months, or even temporarily prevent the breaches, until a big VPC resize / reshuffle / migration over a weekend. Probably $1M/year TC between these two guys. Incredible stuff.
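For a sense of how quickly a small subnet runs out of addresses (assuming the failure mode was IP exhaustion, which the anecdote implies): AWS reserves five addresses per subnet, so the usable pool is a bit smaller than the CIDR suggests. A rough sketch:

```go
package main

import "fmt"

// usableIPs returns how many addresses AWS lets you allocate in a subnet
// of the given prefix length: 2^(32-prefix) minus the 5 addresses AWS
// reserves (network, VPC router, DNS, "future use", broadcast).
func usableIPs(prefix int) int {
	return 1<<(32-prefix) - 5
}

func main() {
	for _, p := range []int{28, 24, 20} {
		fmt.Printf("/%d subnet: %d usable IPs\n", p, usableIPs(p))
	}
}
```

A /24 gives you only 251 allocatable addresses, which a burst of Spot instances from a Batch job queue can exhaust in minutes.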
Felt like on-prem kind of stuff that I was reliably told the cloud solved.
> Behind the fear of releasing is often the fear of exposing your work, and yourself, to criticism.
When I plan to release my project's source code, it has helped me to release parts of the project up front as single-purpose libraries. This helps me think in smaller chunks of work, and makes me feel like I'm finishing more often. It also shortens the feedback cycle.
The headline could also be “Google is deprecating older Widevine CDM versions”, which is a regular process.
Whether or not the playback error rate of your product significantly spikes depends on the number of older non-Chrome browsers that are using Widevine - probably mostly Firefox installations whose users don’t apply regular updates. Chrome itself is doing pretty well at pushing updates for plugins such as the mentioned CDM.
Analytics tools will certainly help to determine how many users will be affected after December 6. Perhaps something for Bitmovin: providing a smart metric for exactly this.
The public Go code I read is mostly libraries, and I admit I’ve also noticed only little adoption of generics.
My personal hypothesis is that many libraries follow the official Go release policy[1]. Since generics were introduced in v1.18 and we’re now at v1.19, libraries would be bound to the features available in v1.17, unless they spend effort working around this with build constraints that make generics available only for >=v1.18. However, this is just an assumption.
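A sketch of the build-constraint workaround mentioned above (file names and the `Keys` helper are hypothetical): a library keeps a generic implementation in a file gated to go1.18+, with an `interface{}`-based fallback in a second file tagged `//go:build !go1.18`. The gated file might look like this:

```go
// Hypothetical file layout:
//   keys_go118.go   //go:build go1.18   (this file, generic version)
//   keys_legacy.go  //go:build !go1.18  (interface{}-based fallback)

//go:build go1.18

package main

import "fmt"

// Keys returns the keys of any map, using type parameters.
func Keys[K comparable, V any](m map[K]V) []K {
	ks := make([]K, 0, len(m))
	for k := range m {
		ks = append(ks, k)
	}
	return ks
}

func main() {
	fmt.Println(len(Keys(map[string]int{"a": 1, "b": 2})))
}
```

The cost is maintaining two implementations of the same API, which is plausibly why many library authors simply wait.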
According to the Go Developer Survey 2022 Q2[2], 1 in 4 respondents said they've already started using generics in their Go code and 14% have started using generics in production code.
The official Go release policy is to only support the 2 most recent versions, currently Go 1.18 and 1.19. So libraries can strictly follow this policy and use generics without any need for build flags.
That said, libraries often want to be friendly to older Go versions, so I agree with your overall hypothesis.
Backwards compatibility is another major concern for libraries: I assume you can't safely change the signatures of existing functions from interfaces to generics, for example. (Well, aside from some special cases like interface{} becoming "any".)
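A hedged sketch of why such a signature change isn’t safe (all names here are hypothetical, not from any real library): a direct call still compiles after the change because the type argument is inferred, but a caller that stored the v1 function in a func-typed variable breaks and must instantiate explicitly.

```go
package main

import "fmt"

// Hypothetical v2 of a library makes Describe generic:
//   v1: func Describe(v fmt.Stringer) string
//   v2: func Describe[T fmt.Stringer](v T) string
func Describe[T fmt.Stringer](v T) string { return v.String() }

type celsius float64

func (c celsius) String() string { return fmt.Sprintf("%.1f°C", float64(c)) }

func main() {
	// Direct calls still compile: T is inferred from the argument.
	fmt.Println(Describe(celsius(21.5)))

	// But a caller that stored the v1 function in a variable breaks:
	//   var f func(fmt.Stringer) string = Describe  // compile error in v2
	// and must now instantiate explicitly:
	var f func(celsius) string = Describe[celsius]
	fmt.Println(f(celsius(0)))
}
```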
Honestly, 1 in 4 sounds pretty much in line with Java and C# codebases, which have a similarly conservative culture (assuming "using" = "writing generic classes/functions", not consuming them - which I think is a pretty safe assumption, as Go already had generics for the consumer in the built-in map/slice/etc.)
Also: many libraries have little or no use for generics, and even if it's some benefit it's still worth asking if it's enough benefit for the added complexity (e.g. replacing "any" with generics is probably good; replacing "string" or "int" with generics: not always).
Also also: in quite a few cases switching to generics is a backwards-incompatible change or an "ugly" change (similar to Foo() and FooContext()), which makes adding them to existing libraries a bit painful.
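One way libraries sidestep that, analogous to the Foo()/FooContext() pattern: keep the old signature untouched and add a parallel generic variant alongside it. A sketch (the `Lookup`/`LookupAs` names are hypothetical):

```go
package main

import "fmt"

// v1 API, kept unchanged for backwards compatibility.
func Lookup(m map[string]any, key string) any { return m[key] }

// New generic variant added next to it, like FooContext next to Foo.
// It asserts the value to the requested type instead of returning any.
func LookupAs[T any](m map[string]any, key string) (T, bool) {
	v, ok := m[key].(T)
	return v, ok
}

func main() {
	m := map[string]any{"port": 8080}

	fmt.Println(Lookup(m, "port")) // untyped any; caller must assert

	port, ok := LookupAs[int](m, "port") // typed result
	fmt.Println(port, ok)
}
```

It works, but as the comment says, it’s a bit ugly: the API surface doubles and both variants have to be maintained.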
For me, a language or tech stack is no more than a toolbox for achieving a higher goal, namely solving specific problems. While I do enjoy building software, "writing code" or mastering a particular tech stack is only one of several things that make software engineering enjoyable.
One of these other things is understanding the kind of problem I want to solve and its environment, by collaborating with stakeholders and domain experts. As soon as I understand their expectations, I decide on the tech stack.
That being said, if I want to estimate which tech stacks are likely to be relevant in the long run, I would start by understanding which business requirements will be relevant. Based on that (and maybe more), I can assess which stacks could be a promising fit.
And even if I'm wrong, I shouldn't neglect the experience I gain mastering or learning a tech stack.
Netflix is able to track when viewers jump off in an episode. I’m wondering whether this information will be used soon to retroactively optimize the story line.
Actually this is Hetzner’s second attempt offering Arm-based dedicated servers. They launched the AX10 model[1] in 2015[2]. It seems like they started to use the “AX” prefix for the AMD-based line later.
Fully agree here: I don’t expect anything else but reliable and performant HTTP caching from a CDN like CloudFront.
Request manipulation is not the duty of a cache - even though other CDN providers mix request manipulation functionality with caching. In my opinion, they don’t need to be in the same product.
If you still need request manipulation, because you don’t control the origin or you don’t want to introduce another service between CloudFront and the origin, you would use CloudFront Functions, which is cheaper than Lambda@Edge and easy to set up.
CloudFront Distributions cannot pass the request to CloudFront Functions before sending to the origin. In other words, they cannot be used to modify origin request/responses. They can only modify the viewer request/responses. [0]
Only Lambda@Edge can help the scenario which I provided, which is also AWS's recommended solution. [1]