In reality, there are only things that people are working on, and things that no one is working on. There's no need for a third category of "things that are in the sprint" as long as someone (anyone!) keeps the task queue sorted in priority order.
Sprints are meant to be a tool to measure a team's velocity, defined as the average amount of work completed in X weeks. You don't know that average until a sprint team has worked together for several sprints on similar work.
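The arithmetic itself is trivial; here's a toy illustration (the sprint history is invented, and "points" stands in for whatever consistent unit of work the team estimates in):

```python
# Toy illustration: velocity as a trailing average of completed work.
completed_per_sprint = [21, 27, 24, 30, 26]  # last five 2-week sprints (invented)

velocity = sum(completed_per_sprint) / len(completed_per_sprint)
print(f"velocity ~= {velocity:.1f} points per 2-week sprint")  # 25.6
```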
Sprints can work very well if you need to:
A) Estimate the time it will take to complete a large project
B) Compare productivity of different teams so you can shift or add people
C) Get continual feedback on the work quality and the workflow process
Over time, I've seen sprint teams become very accurate at estimating their work throughput ahead of time. This can be extremely valuable to a business in delivering on its goals on time and committing to the right number of deliverables.
Sprints are less about categorizing the incoming work than about partitioning the outgoing work to establish a quick feedback loop. This should, in theory, ensure that risks are tackled early.
The outgoing product is nevertheless fully determined by the incoming work being done. It would make more sense to group a set of tasks into a deliverable than into a time period. You have to do that anyway to deliver anything coherent.
There are two arguments against grouping as a deliverable:
1) sprints are (supposed to be) predictable. So you know when to expect something and can plan other activities around that date.
2) (the more important reason IMHO) a sprint enforces early delivery, where features are rough and not very comfortable to use, but it's already visible what they will look like. That means feedback can be collected and fed into the next planning. The assumption is that it's easier for people to talk about hidden assumptions when they interact with the software than when they write specs.
Moreover, it might not just be developer time vs. latency that's being traded off.
Maintaining a stateful websocket connection on the server side isn't cost-free, and that connection would be idle nearly 100% of the time. The bandwidth consumed by Google's polling solution might well be cheaper than the open socket file descriptors of a websockets solution.
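For illustration, here's a minimal sketch of the polling side of that tradeoff (the endpoint, response shape, and interval are all made up; this is not Google's actual API):

```python
import time
import requests

POLL_URL = "https://example.com/api/updates"  # hypothetical endpoint
POLL_INTERVAL_S = 30

def poll_forever():
    """Stateless polling: between requests the server holds no socket,
    no file descriptor, and no per-client state for us at all. The cost
    is the bandwidth and latency of one request per interval."""
    cursor = None
    while True:
        resp = requests.get(POLL_URL, params={"since": cursor}, timeout=10)
        resp.raise_for_status()
        payload = resp.json()
        if payload.get("updates"):
            cursor = payload.get("cursor")
            # ... handle payload["updates"] ...
        time.sleep(POLL_INTERVAL_S)
```

A websocket replaces those periodic requests with a single long-lived connection, which cuts latency and per-request overhead but means the server pays for a mostly idle connection the entire time.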
> There is no good way for a person to identify another person without first mutually agreeing on Brand identities.
How is this absence not a good thing? If someone wants to be identified, they have to go through the trouble of creating an identity. In fact, it would be preferable to also not have a permanent or consistent personal identity with respect to brands either.
No doubt, but I think the point is to ask the right questions. It's not so much about preventing the inept institutions from maintaining their influence as it is about making sure that competing institutions don't face arbitrary barriers to entry that prevent them from competing in the market, even if, initially, people are skeptical of them in comparison to "established" players.
It's a great, succinct post. Deeply uncool. A programmer should be modest about their skills, skeptical about new-new things, eschew bullshit, and terrified of dependencies. I buy the whole thing.
Moreover, I trust the advice of someone who rates themselves poorly more than someone proclaiming that they're a hotshot.
While I agree that in many cases JSON parsing is not the largest consumer of resources, it really sucks when it is.
At some point the general mindset seems to have shifted from making things efficient at every level to assuming an inefficiency doesn't matter because you're probably doing something worse elsewhere anyway.
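When parsing genuinely is the bottleneck, it shows up quickly. Here's a toy micro-benchmark (the document is synthetic, a few megabytes, and the numbers will vary by machine):

```python
import json
import time

# Synthetic document of a few MB; real payloads will differ.
doc = json.dumps(
    [{"id": i, "name": f"item-{i}", "tags": ["a", "b"]} for i in range(100_000)]
)

start = time.perf_counter()
for _ in range(10):
    json.loads(doc)
elapsed = time.perf_counter() - start
print(f"parsed ~{len(doc) * 10 / 1e6:.0f} MB in {elapsed:.2f}s")
```

In a hot loop like this, decoding dominates everything else the program does, which is exactly the case where "it really sucks."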
At the risk of embarrassing myself statistically, what exactly happens when you do this?
I.e., if you're controlling for country, that means you're bucketing by country, and looking at each subset, right? So if country is represented by a non-discrete value... what exactly happens?
So let's pretend there are three types of trees we want to study: Oak, Maple, and Aspen, which we code as 0, 1, and 2 for reasons (there are some good reasons to do this).
Statistically, if you treat them as a continuous variable, the estimates you get will act as if there's an ordering there, and give you the effect of a one-unit increase in tree. So it will tell you the effect of Oak vs. Maple and Maple vs. Aspen, assuming those two effects are identical, and that Oak vs. Aspen will be twice that.
This is...nonsense, for most categorical variables. They don't have a nice, ordinal stepping like that.
In practice, if you have n countries, you'll add n-1 binary variables to your regression equation. The first country is the reference level (all zeros); for the second country, set the first new variable to one and the rest to zero; and so on.
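For concreteness, here's a minimal pandas sketch of that n-1 ("dummy") encoding, using the tree example from above (the data is invented):

```python
import pandas as pd

# Hypothetical data: tree species is a category, not a number.
df = pd.DataFrame({
    "species": ["Oak", "Maple", "Aspen", "Oak", "Maple"],
    "height":  [21.0, 18.5, 14.2, 23.1, 17.8],
})

# n categories -> n-1 binary columns; the dropped level ("Aspen",
# alphabetically first here) becomes the all-zeros reference.
dummies = pd.get_dummies(df["species"], prefix="species", drop_first=True).astype(int)
print(dummies)
#    species_Maple  species_Oak
# 0              0            1
# 1              1            0
# 2              0            0
# ...
```

Each regression coefficient then measures a species' effect relative to the reference level, with no artificial ordering imposed.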
> However, if the top-level politicians, who remain technically inept, are the ones giving the orders, confusion will remain.
The converse would also be true though; a technologically adept leadership that didn't understand society's principles would also lead to confusion, likely worse confusion.
Deeper still, a militarily adept leadership, or any other narrowly adept leadership, that didn't understand society's principles would also lead to confusion.
Politicians have to deal with society as a whole, and as long as no one can master every area of expertise, they need to have someone doing analyses on their behalf.