I see this at work, being inside one such customer. The people driving these mandates barely understand IPv4, let alone know what IPv6 is. They're software developers after all, not CCIEs.
There's nothing preventing you from having a private network using unique address space that's either blocked from accessing the internet via a firewall on a router or just plain not even routed. You could even use ULA networks with stateless prefix translation to avoid using GUA addressing for your private network.
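For illustration, a minimal Python sketch of that last idea, assuming a freshly generated RFC 4193 ULA prefix and using the 2001:db8::/48 documentation range as a stand-in for a real GUA prefix; it also skips the checksum-neutral adjustment a real NPTv6 translator (RFC 6296) performs:

```python
# ULA prefix generation (RFC 4193) plus a naive NPTv6-style prefix swap.
# 2001:db8:abcd::/48 is the documentation range standing in for a real GUA /48.
import ipaddress
import secrets

# fd00::/8 + a random 40-bit global ID -> a /48 you can use internally without asking anyone
global_id = secrets.randbits(40)
ula_prefix = ipaddress.IPv6Network(((0xFD << 120) | (global_id << 80), 48))
gua_prefix = ipaddress.IPv6Network("2001:db8:abcd::/48")

inside_host = ula_prefix.network_address + 0x1  # some internal host, ::1 within the ULA /48

def npt_swap(addr, from_net, to_net):
    """Keep the host bits, swap the prefix bits (ignores RFC 6296's checksum tweak)."""
    host_bits = int(addr) & int(from_net.hostmask)
    return ipaddress.IPv6Address(int(to_net.network_address) | host_bits)

print(f"inside : {inside_host}")
print(f"outside: {npt_swap(inside_host, ula_prefix, gua_prefix)}")
```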
The sad part is that IPv6 support is abysmal on every cloud, so just migrating to it imposes serious limitations, as the blog author points out.
One of the huge selling points of ShadowPlay is that it doesn't place an extra burden on Windows to capture the screen every frame; instead it grabs the actual framebuffer (and encodes the frames) directly on the GPU, without having to switch or bottleneck any rendering code paths. It wouldn't surprise me if people who swear by ShadowPlay can tell the performance difference compared to software screen capture.
Basically it aims to replace the need for an additional out-of-band capture card (those can get very expensive, especially if you want 4K60).
I don't think you're going to be running into much performance difference between the two. People say GameCapture already grabs the frame buffer, but I can't say whether that's true or not.
Most likely it's a different framebuffer: ShadowPlay grabs the exact frames sent to the physical monitor, while GameCapture (presumably, if it doesn't use the Nvidia-specific APIs that ShadowPlay does) intercepts them in software. The former doesn't require inserting any extra steps into the rendering pipeline, so it's generally faster (sometimes by a lot).
Unfortunate. DSC works fine on Apple Silicon. I did notice DisplayPort-related regressions on my old 2015 Mac well before Big Sur though; Apple's track record is not great.
Yeah, the Apple Engineering Department asked me to run a ton of diagnostics, both on Catalina and Ventura, then came back with “Would you ask Dell to confirm their monitor is compatible with macOS Ventura specifically”. Lol
Doesn't really change anything for the end user who wants to access the website and is bummed that it doesn't work. There might be politics in the way, but all they care about is that it doesn't work.
Continent-level information doesn't exist. EDNS Client Subnet doesn't send a location; it sends a subnet. That subnet's "location" then has to be looked up in geolocation databases, which may or may not be accurate. There's no subnet that will map to a continent.
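To make the subnet-vs-location distinction concrete, here's a small dnspython sketch (the resolver address and the 203.0.113.0/24 subnet are arbitrary examples): the client ships a subnet in the ECS option, and any notion of "Europe" or "North America" only appears after someone runs that subnet through a geolocation database.

```python
# Send a query carrying an EDNS Client Subnet option with dnspython.
# 8.8.8.8 and 203.0.113.0/24 are placeholder examples.
import dns.edns
import dns.message
import dns.query

ecs = dns.edns.ECSOption("203.0.113.0", 24)   # a subnet, not a location
query = dns.message.make_query("example.com", "A", use_edns=0, options=[ecs])
response = dns.query.udp(query, "8.8.8.8", timeout=5)

print(response.answer)
# A server that honours ECS echoes the option back with a scope prefix length;
# mapping that subnet to a country or continent is purely a geo-DB lookup.
for option in response.options:
    if isinstance(option, dns.edns.ECSOption):
        print(option.to_text())
```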
macOS will render at the next integer scale factor up and then downscale to fit the resolution of your monitor, instead of just rendering at the fractional scale in the first place.
There are several scenarios where it clearly doesn't look that good, and where Windows objectively does a much better job.
Most people (and companies) aren't willing to spend $1600 on Apple's 5K monitor, so they get a good 27" UHD monitor instead, and they soon realize macOS either gives them pixel-perfect rendering at a 2x scale factor, which corresponds to a ridiculously small 1920x1080 viewport, or a blurry 2560x1440 equivalent.
The 2560x1440 equivalent looks tack sharp on macOS. It renders at 5120x2880 and scales it down to native; as I said, it's effectively supersampling. I used this for years and never experienced a blurry image. I now run a 5K2K monitor, also at a fractional scale, and again it looks excellent.
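The arithmetic both comments are describing, spelled out for a 27" UHD panel (numbers only; whether the non-integer downsample is actually visible is exactly what's being debated above):

```python
# macOS "looks like 2560x1440" on a 3840x2160 panel: render at 2x the logical
# resolution, then downsample that backing store to native pixels.
native = (3840, 2160)        # 27" UHD panel
looks_like = (2560, 1440)    # selected scaled mode

backing = (looks_like[0] * 2, looks_like[1] * 2)   # 5120x2880 render target
downscale = backing[0] / native[0]                 # ~1.333 source pixels per panel pixel

print(f"render {backing[0]}x{backing[1]}, downscale {downscale:.3f}x "
      f"to {native[0]}x{native[1]}")
```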
Had the same reaction as soon as I found out pyATS is a Cisco-specific thing. I run very simple networks for events on shoestring hardware/budgets and built a simple wrapper around my own object model using Python, Jinja and NAPALM to deploy Cisco switches via SSH. It has Terraform-like semantics (plan/apply) and lets me be productive and eliminate config drift. NAPALM does all of the heavy lifting; it is fantastic. I will probably be integrating it with NetBox soon.
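A stripped-down sketch of what such a plan/apply loop over NAPALM can look like (the hostname, credentials and Jinja template are placeholders, not the actual wrapper described above):

```python
# Minimal plan/apply over NAPALM + Jinja2; all device details are placeholders.
from jinja2 import Template
from napalm import get_network_driver

TEMPLATE = Template(
    "hostname {{ name }}\n"
    "ntp server {{ ntp_server }}\n"
)

def plan_apply(host, username, password, variables, apply=False):
    """Render the intended config, diff it against the device, optionally commit."""
    rendered = TEMPLATE.render(**variables)
    driver = get_network_driver("ios")          # Cisco IOS over SSH under the hood
    device = driver(hostname=host, username=username, password=password)
    device.open()
    try:
        device.load_merge_candidate(config=rendered)
        diff = device.compare_config()          # the "plan"
        if not diff:
            print(f"{host}: no drift")
            device.discard_config()
        elif apply:
            print(f"{host}: applying\n{diff}")
            device.commit_config()              # the "apply"
        else:
            print(f"{host}: pending changes\n{diff}")
            device.discard_config()
    finally:
        device.close()

plan_apply("192.0.2.10", "admin", "secret",
           {"name": "events-sw1", "ntp_server": "192.0.2.123"}, apply=False)
```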
More like it gets rid of band-aids