No matter how many new languages we invent, this problem of surprise sub-languages being parsed from strings never goes away. It's almost as if program code should be somehow impossible to confuse with user-visible text. Maybe there's no good solution though :(
The appropriate constraint is well known and exists in several languages: the magic format strings mustn't be variables. In Swift there's a type, StaticString, which matches literals like "This is some text" but never interpolated strings or other arbitrary runtime values.
Of course just because the constraint is known doesn't mean (as you illustrate) that everybody knows about it or makes use of it in their own programming.
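To give a rough sense of what that constraint looks like in practice, here's a minimal sketch. StaticString is the real Swift type; the log function itself and its use of Foundation's String(format:arguments:) are just illustrative assumptions, not anyone's actual API.

    import Foundation

    // Sketch: a logging function that only accepts compile-time literals
    // as its format. "log" is a made-up name; StaticString is real.
    func log(_ format: StaticString, _ arguments: CVarArg...) {
        // Because a StaticString can only be built from a literal in source
        // code, user-controlled data can never become the format itself.
        print(String(format: "\(format)", arguments: arguments))
    }

    log("User %@ logged in", "alice")   // fine: the format is a literal
    // let fmt = untrustedInput
    // log(fmt, "alice")                // rejected: fmt is not a literal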
This isn't an "appropriate constraint", because dynamically constructing format strings is a thing that normal programs can and do do. StaticString exists due to various limitations on the os_log APIs and it's somewhat amusing that you're trying to spin it into some sort of "known" fact that everyone should follow.
In almost all cases this feature is a footgun, reminiscent of eval() and similar cases, and so should not be provided. Sure, it's easy and flexible, but in exchange it's horribly unsafe.
Imagine that, expecting a RESTful API from some new remote service you're accessing, you discover it instead offers precise instructions for where the JavaScript buttons are rendered on a 640x480 Internet Explorer 6 page, and tells you to automate pressing those buttons through a browser-automation gadget.
You'd be aghast, right? Sure, it would work, but it's clearly not an ergonomic way to provide automation. Why not just offer a RESTful API like most similar services?
Dynamically constructing format strings is the same ergonomic awkwardness, for the benefit of lazy workers who couldn't be bothered to actually deliver what was needed.
Instead I want a way to reach into the same formatting infrastructure that is used for my string literal formats, and manipulate it dynamically, not by creating "format strings" dynamically.
As an example: if your formatting library can do format("{some}{values}{with:parameters}", v1, v2, v3), I ought to be able to write my own function myfn("[different][syntax][same#features]", v1, v2, v3) that re-uses the infrastructure of the existing formatting library, rather than having to write a myfn("[different][syntax][same#features]") that returns "{some}{values}{with:parameters}" and only delivers the same effect by round-tripping through that sub-language.
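To make that concrete, here's a minimal sketch assuming a hypothetical formatting library; every name in it (FormatSegment, render, myfn) is made up for illustration. The point is that the library exposes its building blocks, so an alternative front-end syntax can drive the same back end directly instead of emitting a "{...}" format string to be re-parsed.

    // Hypothetical building blocks a formatting library could expose
    // alongside (or instead of) its "{...}" string parser.
    enum FormatSegment {
        case literal(String)
        case placeholder(index: Int)
    }

    // The shared back end: walks the segments and substitutes values.
    func render(_ segments: [FormatSegment], _ values: [Any]) -> String {
        var out = ""
        for segment in segments {
            switch segment {
            case .literal(let text):
                out += text
            case .placeholder(let index):
                out += String(describing: values[index])
            }
        }
        return out
    }

    // My own front end, with its own "[...]" syntax, targets the same
    // back end directly rather than generating a "{...}" format string.
    // (The text inside the brackets is ignored in this sketch; placeholders
    // are simply taken in order.)
    func myfn(_ pattern: String, _ values: Any...) -> String {
        var segments: [FormatSegment] = []
        var literal = ""
        var index = 0
        var i = pattern.startIndex
        while i < pattern.endIndex {
            if pattern[i] == "[", let close = pattern[i...].firstIndex(of: "]") {
                if !literal.isEmpty { segments.append(.literal(literal)); literal = "" }
                segments.append(.placeholder(index: index))
                index += 1
                i = pattern.index(after: close)
            } else {
                literal.append(pattern[i])
                i = pattern.index(after: i)
            }
        }
        if !literal.isEmpty { segments.append(.literal(literal)) }
        return render(segments, values)
    }

    // myfn("[user] scored [points]", "alice", 42)  ->  "alice scored 42"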
Horse riding is a fun example of your observation. It's above 30 to 1 women to men among riders, until you reach the higher levels of competition; then it's the reverse.
I've done horse riding; it's unique and fun, but definitely not for me.
Though it does seem obvious, it's pretty dangerous to go looking for patterns in the data. It's more scientific to define the criteria before knowing the results, so you have a hypothesis to test, not just an observation to make.
The danger is that you will come to a conclusion that is objectively wrong.
P-hacking is a known issue in many scientific fields. So is drawing conclusions from over-collection of data without repeating the study. As the number of data points you collect approaches infinity, the chance of finding at least one meaningful-seeming correlation approaches 100%, because having an unlimited number of data points to bash against each other makes you more likely to observe an improbable correlation.
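To put a rough number on that intuition (this is just standard multiple-comparisons arithmetic, not anything from the study itself): if you run m independent tests at significance level α, then

    P(at least one false positive) = 1 - (1 - α)^m

so at α = 0.05, only 60 unrelated comparisons already give you roughly a 95% chance of at least one "significant" correlation by luck alone.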
No, looking at the data is observation or analysis by definition, whether or not you make a prediction about it. Experimentation is when you make a hypothesis first (after observation) and then test it. Looking at data is not a test; it's either observation or analysis.
From TFA: "the people vs things dimension is a continuous scale, but in our analyses we only used categories that are predominantly one or the other. All things-oriented jobs have a clear technical component, ranging from locomotive engine driver to astronomer and all people-oriented jobs have a clear component of providing help to individuals."
It also identified "face to face interactions" as a requirement for being people-oriented. So dating app developer is clearly not included.
Eugenics involved horrific deprivations of human dignity in service of arbitrary decisions about what seemed at the time to be "self-evidently good" traits that are much less self-evidently good now… if the traits were even retained.
Is that really why eugenics is demonized? That the human race (not any living individuals) is giving up its freedom to some sort of fallible authority?
I guess, like censorship, people only object to an arbitrary subset of it. People love censorship of "really bad" ideas just as they love eugenics of "really bad" genes.
Most people seem happy with asserting our authority of gene selection over animals though :P Conservationism is that. Not to mention actual selective breeding and killing of course. Perhaps we rightly believe humans truly are capable of being benevolent arbiters of who gets to reproduce and who doesn't in animals.
Animal breeding is somewhat different from human breeding.
With animal breeding, there's almost always a set goal: getting a fatter, tastier, hardier, or in the extreme, more aesthetically pleasing animal.
I doubt we'd really use eugenics for any of them. However, a pure eugenicist might say "Well, why not? Why not select for people less prone to cancer, more physically fit, with better immune systems?"
So then it comes into the question of what eugenics has historically been used for. The worst example would probably be the sterilization of gay people. We used it against mental illness in an age where lobotomy was considered a good treatment for mental illnesses such as hyperactivity.
And all of this, of course, sort of sidesteps the fact that for humans there seems to be no reason why we couldn't use gene therapy in place of breeding programs. Why do we need eugenics when we can directly target the traits we want in the next generation (and quite a few people would opt in for those changes)?
We already see some of this just in genetic testing of embryos and fetuses. When people learn a fetus has Down syndrome, they get abortions and try again. I could see the same with a whole host of chronic illnesses.
People select their own mates. On the assumption that you aren’t just advocating for stuff that’s always happening, you presumably want to set up some authority for who people mate with other than the couples involved. Why can’t they decide for themselves?
It really is pretty similar to the kinds of things that are used to advocate against E2E encryption. An attempt to have the state encroach on territory that used to be private.
Seems like he wasn't a "lowly country rector", since he had a master's degree from Cambridge, was a member of the Royal Society and had many other scientific accomplishments. The rectorship was apparently just a perk that came with being a successful scientist.
I wish bloggers wouldn't try to create a false impression of things like that. It's disinformation - intended to mislead the reader.
Sure, there's a balance. But the idea that "it was working fine, so why change it" leads to gigantic, bloated software that nobody can really understand.
As an anecdote: I developed a product in a field where the main players were extremely well established, with codebases dating back to punch-card days. Their software could do everything you might imagine wanting it to do. But it was also so complex that you had to take a training course to learn it. What's interesting is that some customers of those competitors also bought my product, because it was quicker and easier to get common things done. All that power actually led to a worse product in some ways.
Except new users are coming from other platforms that have a "which", and genuinely new users have to learn the commands anyway, and it helps if they get intuitive terminology like "which" instead of opaque names like "command -v", which at a glance I'd assume fetches version info for the given command.
It's becoming like USB version names. If you're familiar with them from working with them all regularly, you can just remember what name means what. But for newcomers, it's a massive struggle. I feel that naming things in general imposes a lot of cognitive burden on people. Chrome version numbers are beautiful. Just incrementing integers, the way we numbered things as kids.
I wonder if the naming mess is intentional for marketing or an accident of not predicting the future well enough? They certainly seem to be trying to keep it simple and it's nowhere near the impossible muddle of, say, Intel and AMD CPUs or Nvidia GPUs yet.
Really? Can you point to any hype from the last, say, 5 years that even hints at thinly spread mining or ownership? Individuals were mining on their PC in its first couple of years, but that ended long ago.
The hype is all about how it’s distributed and nobody controls it, when in practice it’s not much different from conventional money except that those with power are even more hidden.
Its distributedness is still worth something even if only a small number of people control most of it. If they go rogue or get stopped by the government, others can take over. That's quite unlike any centralized system.