To be completely honest, it’s a bit naive to assume that members of Congress represent their constituents first. It is commonly known, and backed up by evidence, that they serve their financial backers first and foremost.
It is unclear who the backers in this case are. But when in doubt, follow the money.
For almost all the controversial content I have seen on X, the community notes work pretty well as a fact check. It’s not perfect, but it’s the best solution we have seen from a major social media site.
1. People don’t trust the government and are wary of giving it more power
2. You’re missing that your face is different from a piece of paper. You can choose to refuse to show ID in some cases. You could keep your face covered, but that has ramifications you might not want.
3. The Feds are worse because they are already much more powerful than any state government.
It’s really not hard to see where the fear is. This might be one of the most obtuse comments I have seen on here
1. Sure, folks don't trust the government. I'm gonna guess that same sort still wants to see ID in some situations. In this case, the government already has the power: state IDs already exist, and the federal government can get access to them. You already have an SSN; if someone wanted to mess with you, they'd just code you as dead. "Accidentally".
2. Sure, your face isn't a piece of paper. It is still a major identifier, which is why it's on your ID. The DMV already has your picture on file (hence, you don't always need a new picture taken). I'm not sure there are many situations where you could refuse to show ID to the government.
3. That's not a given, depending on the situation.
The advantage of a traditional ID card is that you usually know when "the readout" of your ID happens. I don't know about you, but my idea of a good society does not involve states reading out the identity of their citizens without them noticing. If they have to do that, they are not that good at the whole state-thing.
A good ID system has to solve two problems:
- Allow verification that the holder of the ID is the owner of the ID (identity verification)
- Allow certain facts to be read out. Bonus points if this can be done granularly, e.g. verifying to the other side that you are older than X years without telling them when you were born, where you live, or what number a state assigned to you. Extra bonus points if you can see which information is read out and can deny (or even flag) overly eager information requests.
Note that for identity verification you just need to know whether the biometric identifier of the person holding the ID matches the picture on the ID. You do not need to know when they were born, what their name is, where they live, etc.
In a safe digital future this need-to-know principle is IMO necessary to keep the power symmetry between individuals and governments/corporations/criminals.
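The granular-readout idea above can be sketched as a signed-claims scheme: the issuer signs only the facts the other side needs, so a bar can check "over 18" without ever seeing a birthdate. This is a minimal illustration, not a real credential system; real deployments use asymmetric signatures (the HMAC with a shared `ISSUER_KEY` here is just a stand-in), and the key name and attribute names are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret; a real system would use an asymmetric
# signature so verifiers never hold signing material.
ISSUER_KEY = b"demo-issuer-secret"

def issue_claim(attrs: dict) -> dict:
    """The state issues a signed claim containing ONLY the requested facts."""
    payload = json.dumps(attrs, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"attrs": attrs, "sig": sig}

def verify_claim(claim: dict) -> bool:
    """The relying party checks the signature; tampered attrs fail."""
    payload = json.dumps(claim["attrs"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["sig"])

# The verifier learns only {"over_18": True} -- no name, birthdate,
# address, or state-assigned number ever crosses the wire.
claim = issue_claim({"over_18": True})
```

Note that the claim carries no more than what it asserts; the "deny over-eager requests" part would live in the wallet software that decides which attrs to hand to `issue_claim` in the first place.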
1. “Just because they have power let’s give them more power.” This is a bad argument
2. There certainly are situations where you can refuse. And someone needs to ask you explicitly for it right now. That won’t be true when your face is a government ID.
3. This is simply not true. When push comes to shove, the feds will win every time
Identity “verification” by SSN is stupid; almost every other democracy has a national ID system with residential verification, and it works better and makes it easier to get benefits or prove your identity than Real ID or any other absurd American invention.
The first part is correct; the rest is technocratic handwaving over real concerns, based on real experiences, that lead to dire consequences that cannot be undone and often aren't fully known until it's too late to undo them.
I’m not commenting on the facial recognition tech. They do that without a national ID.
State and the Federal government also work together in security, especially “National Security.” That’s how police departments end up with former military vehicles.
> What do you mean by ‘well implemented contingency thinking’?
"I knew people wanted to do A. But I thought it'd also be helpful if when you did A, a feature like B could be there to prevent really common mistakes. And in this other set of features over here, if an annoying thing like C ever happened, well, I put in stop-gaps D and E so it was much less of a problem."
I think most businesses fall into this so-called middle class already. Perhaps this group could be labeled The Silent Majority of businesses, given how these folks get up and do their work every day without particularly expecting to escape the grind or become billionaires.
I see this in my extended family, where almost everyone runs this kind of business and has for several decades. They make much more money than if they worked for someone else, but none of them are going to break the $100M mark.
I think it's too much to expect staging to match the load and access patterns of your prod system.
I find staging to be very useful. In the various teams I have been a part of, I have seen the following productive use cases for staging:
1. Extended development environment - If you use a microservices or serverless architecture, it becomes really useful to do end-to-end tests of your code on staging. Docker helps locally, but unless you have a $4,000 laptop, the dev experience becomes very poor.
2. User acceptance testing - Generally performed by QAs, PMs, or other business-side folks. This becomes very important for teams that serve a small number of customers who write big checks.
3. Legacy enterprise teams - Very large corporations in which software does not drive revenue directly, but high quality software drives a competitive advantage. Insurance companies are an example. These folks have a much lower tolerance for shipping software that doesn't work exactly right for customers.
> I think it's too much to expect staging to match the load and access patterns of your prod system.
For a lot of things, this makes staging useless, or worse. When production falls over even though it worked in staging, staging gave you unwarranted confidence. When you push to production without staging, at least you know there's danger.
That said, for changes that don't affect stability (which can sometimes be hard to tell), staging can be useful. And I don't disagree with a staging environment for your use cases.
> For a lot of things, this makes staging useless, or worse.
That depends on what staging is used for. If it's used to run e2e tests, give demos to PMs, etc., you can use staging. For performance testing, you can set up an env similar to prod, run your perf tests, and then kill the perf env; or you can scale up the staging env, not let anyone use it except for performance testing, and then scale it down.
It’s crazy sometimes how big the difference is. One recent example: I had to build a custom Docker image of some OSS project. Not even a huge one; only what I would call small-to-mid size. Just clone the repo and run the makefile, super simple. It took 35 minutes to build on my 2020 Mac Mini (Intel) and would probably have taken half that if I had the most recent machine.
Why would I build on a local machine vs running the build on a server in a datacenter? Per your own arguments, server grade hardware is going to compile much faster than any local workstation.
Ah, good old "compiling" [0]. When a worker needs a $4,000 machine to actually do his work, then it's unavoidable. The slow machine? $2,000 ought to be enough™ for everyone else.
When I worked for big corp, the reason we in engineering were told for getting $1,000 laptops was that it wasn't fair to accounting, HR, etc. for us to have better machines. In the past, people from these departments had complained quite a bit.
The official reason (which was BS) was "to simplify IT's job by only having to support one model"
Who cares what is "fair"? A decision like that should be based on an elementary productivity calculation. If not, the inmates have taken over the asylum.
I think we can establish that the database is the biggest culprit in making this difficult.
As an independent developer, I have seen several teams that either back-sync the prod db into the staging db OR capture known edge cases through diligent use of fixtures.
I am not trying to counter your point necessarily, but just trying to understand your POV. Very possible that, in my limited experience, I haven't come across all the problems around this domain.
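For what it's worth, the back-sync approach usually needs a scrubbing step so real PII never lands in staging. A minimal sketch, assuming a row comes back as a dict and assuming a hypothetical `PII_FIELDS` list; hashing instead of blanking keeps values stable, so joins and duplicate-detection still behave like prod:

```python
import hashlib

# Hypothetical list of sensitive columns; a real pipeline would
# derive this from a schema annotation or data catalog.
PII_FIELDS = {"email", "name", "phone"}

def scrub_row(row: dict) -> dict:
    """Replace PII with a short, stable hash so the shape and join
    behavior of the data survive the prod -> staging sync."""
    out = {}
    for key, value in row.items():
        if key in PII_FIELDS and value is not None:
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out
```

The same idea scales to a streaming sync: apply `scrub_row` to each row between the prod dump and the staging load, and never let the raw dump touch the staging host.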
The variety of requests and load in staging never matches production, along with all the messiness and jitter you get from requests coming from across the planet rather than just from your own LAN. And you'll probably never build it out to the same scale as production and have half your capex dedicated to it, so you'll miss issues that depend on your own internal scaling factors.
There's a certain amount of "best practices" effort you can go through to make your preprod environments sufficiently prod-like but scaled down: real data in their databases, all the correct services running, a load-testing environment where you hit one front end with a replay of real load taken from prod logs to look for perf regressions, etc. But ultimately time is better spent using feature flags and one-box tests in prod rather than going down the rabbit hole of trying to simulate packet-level network failures in preprod to make it look as prod-like as possible. (Although if you're writing your own distributed database you should probably be doing that kind of fault injection, but then you probably work somewhere FAANG scale, or you've made a potentially fatal NIH/DIY mistake.)
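The core of the feature-flag approach is deterministic bucketing: hash the user into a stable bucket so a gradual rollout gives each user a consistent experience and can be dialed up or rolled back without a deploy. A minimal sketch (function and flag names are hypothetical; real systems use a flag service rather than a hardcoded percentage):

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user into 0..99 and enable the flag
    for the first `rollout_pct` buckets. Same user + same flag always
    lands in the same bucket, so partial rollouts are sticky."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# Ramping a rollout is then just raising the percentage over time:
# 1 -> one-box-equivalent exposure, 50 -> half of users, 100 -> everyone.
```

Hashing `flag:user_id` rather than `user_id` alone keeps different flags' rollouts uncorrelated, so the same early cohort doesn't absorb the risk of every experiment.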
The article doesn't talk about any of that though. The article says staging diffs prod because of:
> different hardware, configurations, and software versions
The hardware might be hard or expensive to match exactly in staging (but also, your stack shouldn't be hyper-fragile to hardware changes). The latter two are totally solvable problems.
With modern cloud computing and containerization, it feels like it has never been easier to get this right. Start up exactly the same container/config you use for production on the same cloud service, and it should behave acceptably close to the real thing. The real problem is the lack of users/usage.
I was responding to other commenters, not really the title article.
The stuff you cite there is pretty simple to deal with: configuration management is basically a solved problem, and I don't know why you couldn't just fix the different hardware.
The more universal problem of making preprod look just like prod so that you have 100% confidence in a rollout without any of the testing-in-prod patterns (feature flags, smoke tests, odd/even rollouts, etc) is not very solvable though.
A lot of things seem like they shouldn’t be, until you’ve debugged a weird kernel bug or driver issue that causes the kind of one-off flakiness that becomes a huge issue at scale.
IME, when you are not web-scale, the issues you will miss from not testing in staging are bigger than the other way round. But that doesn't mean all the extra effort the "test in prod only" scenario requires shouldn't also be put in when you do have a staging env.