I think drinking too much is why we are having this problem in the first place, so maybe people should just stop it. You'd end up with Maßßen and Maszen eventually. Ich geh dann mal zum Arz, mir jezt ein attezt holen (deliberately misspelled: "I'll just go to the doctor now and get myself a sick note"). Da wird man doch bekloppt ("it drives you mad").
I maintain that ss is a legitimate substitute in standard German orthography, just as ae is for ä etc., not only because I can't be arsed to switch key layouts, but because ß is a useless character. Compare Fuß, Ruß and Mus; Muss and Bus, Plus; Museum and Muster: vowel length before s is not predictable from the spelling anyway. ß is derived from a ligature, which is no less confusing between ss and sz, so Buße implies it was once written Busse (or Busze). Add to that that the spelling variation across jetzt, jezt, and Arzt is rather ridiculous when they all sound alike there. I mean, what do we have the c for? We'd rather standardize an extra letter and use c chiefly in ch and sch, but why? Ich habe da so meine Cweifel, ehh Tsweivel, eh ... ach lassen wir dass ("I have my doubts about that, uh ... ah, let's drop it").
That does very much sound like magic. The wave function is in principle just describing the probability of measuring a value, if I recall correctly. What you make of it is but an attractive theory to interpret that wavefunction after the fact. And it's not particularly satisfying. It's completely meaningless that it would have been in all places at once, if in fact, when you look, it's only ever in one place.
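For reference, "just describing the probability of measuring a value" is the Born rule; a minimal sketch of the standard textbook statement (the symbols are the usual ones, nothing from this thread):

```latex
% State as a superposition over orthonormal outcomes |i>:
\[
  \lvert \psi \rangle = \sum_i c_i \,\lvert i \rangle ,
  \qquad \sum_i \lvert c_i \rvert^2 = 1 .
\]
% Born rule: probability of observing outcome i
\[
  P(i) = \lvert \langle i \mid \psi \rangle \rvert^2 = \lvert c_i \rvert^2 ,
\]
% and a measurement that yields i leaves the system in the single
% eigenstate |i> (the "collapse").
```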
“When you look” is the key phrase there. When you measure you’ve perturbed the system and a superposition of probabilities collapses to a single possibility. You can try to ignore the problem, but it keeps cropping up. Take for example the Delayed Choice Quantum Eraser:
I am not going to even click that. "superposition of probabilities" is not a real thing, merely a mathematical tool. The definition of probability still involves something that is going to happen. Hey that's amazing, it's the probe-ability.
edit: It's very hard for people to admit we really don't know, is what I am trying to say. I mean that's the real definition of chance, we don't know for certain. Whereas, if there is more to the field theories, more than ether, I'd really like to know, but I'm not holding my breath. Between measurement uncertainty and observer uncertainty, the models will remain just that.
Well you’re allowed to believe whatever you want of course, however divorced from observation, experiment and theory it happens to be. I would just caution you against drawing such strong conclusions about a field you seem to know very little about based on what you think should be, or your intuition.
Well, now I read the paper and, while I do find it interesting, I don't follow the formulas, so I wonder what you are trying to show me.
Removing the Beamsplitter (BSA) would only remove the ability to correlate the measurements, so how do you know that there are actually no instances of Gaussian distribution happening already at the origin, which would cause the fork at the splitter (instead of information traveling back in time, for example)?
> observation, experiment and theory
we agree on the observation and theory parts. The predictive power of the theory for experiments is duly noted, but here the object under scrutiny is way bigger than a single atom. And the science around the materials used, crystallography to begin with, is way above my pay grade.
The language in the paper caused me a bit of trouble: "It is easy to see ...", "at the same time", "a quantum".
While I completely agree with the sentiment, I still wonder how this differs from graphics. Is drawing easier, for lack of a better word, than gfx programming? I would argue that comparison is apt and yours falls flat. B flat.
For starters, you could have a skeleton of a script with accessible parameters, given knobs. That would look like a DAW, except with text instead of pseudo-design with screws and LCDs that mimic real objects (skeuomorphic). Yes, you want buttons; visual programming still sucks. Demo coders like Farbrausch program their own demo tools, e.g. Werkkzeug 3, for exactly that reason, don't they? Taking gfx programming as the comparison: of course textures, models and so on are modeled in an analogue fashion. Nobody programs a human.md3 to evolve from an embryo for fun, but in principle it could be done at some point. Music is a lot like vector graphics: you can do a whole lot with simple shapes and gradients. And you can program a complicated sound effect perhaps more easily than as a 5-second loop rendered to wav and pitched by the DAW, if you know what I mean.
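To make the "skeleton of a script with accessible parameters" concrete, here is a minimal sketch in Python, stdlib only; the names (KNOBS, render, write_wav) and the parameter set are made up for illustration, not any real tool's API. A handful of knob-like values drive a sine oscillator with vibrato, rendered straight to a wav:

```python
import math
import struct
import wave

# Hypothetical "knobs": exactly the values a text-first DAW front end
# could expose as sliders next to the script.
KNOBS = {
    "freq_hz": 440.0,      # pitch
    "duration_s": 0.5,     # note length
    "gain": 0.8,           # volume, 0..1
    "vibrato_hz": 5.0,     # vibrato rate
    "vibrato_depth": 3.0,  # vibrato depth in Hz
}

SAMPLE_RATE = 44100


def render(knobs):
    """Render one note as a list of float samples in [-1, 1]."""
    n = int(knobs["duration_s"] * SAMPLE_RATE)
    samples = []
    phase = 0.0
    for i in range(n):
        t = i / SAMPLE_RATE
        # Frequency wobbles around the base pitch -> vibrato.
        f = knobs["freq_hz"] + knobs["vibrato_depth"] * math.sin(
            2 * math.pi * knobs["vibrato_hz"] * t)
        phase += 2 * math.pi * f / SAMPLE_RATE
        samples.append(knobs["gain"] * math.sin(phase))
    return samples


def write_wav(path, samples):
    """Dump the float samples as 16-bit mono PCM."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))


if __name__ == "__main__":
    write_wav("note.wav", render(KNOBS))
```

Swapping the sine for any other expression in render is the "program your sound effect instead of rendering a loop" idea in miniature.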
Note composition, as you remark, is especially beside the point. The drone/noise perspective might be an extremely misleading example, but music programming should be able to paint outside the classical frame. It should allow one to define sweet spots of resonance instead of chasing harmony by ear. That does require deep understanding, so instead I'm happy with finger painting ... because it's so close to the metal, err, paper.
It's very sad because I have no idea of the potential. Composition to me is choosing an instrument and elaborating simple known melodies into more complex ones until it sounds harmonious thanks to obeying the circle of fifths, but that's mostly it, and mostly rather superficial, which doesn't matter as long as the instrument sounds nice; and if it doesn't, I'll split the melody by octaves, say, and choose two different instruments, altering the octaves to get a high contrast (shout out to my man). Because of the loop nature of pattern-based composition, I am mostly not interested in arrangement. This again compares to shader programming. And even big studios basically just stitch together single scenes. ... yadda yadda yadda.
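The circle of fifths invoked above is, incidentally, just modular arithmetic, which is arguably a point in favor of programming music: stepping by a perfect fifth (7 semitones) mod 12 visits all 12 pitch classes exactly once, because gcd(7, 12) = 1. A quick sketch (sharps-only note names assumed for simplicity):

```python
# Stepping by a perfect fifth (7 semitones) modulo 12 cycles through
# every pitch class before returning to the start.
NOTES = ["C", "C#", "D", "D#", "E", "F",
         "F#", "G", "G#", "A", "A#", "B"]


def circle_of_fifths(start="C"):
    i = NOTES.index(start)
    return [NOTES[(i + 7 * k) % 12] for k in range(12)]


print(circle_of_fifths())
# ['C', 'G', 'D', 'A', 'E', 'B', 'F#', 'C#', 'G#', 'D#', 'A#', 'F']
```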
You might also compare the violin to the voice. Far more people can or think they could sing. Making the violin sing is just much more complicated, but not exactly boring.
I'm not buying this. "eke" as "also" may give the sense "by-name", alright. What irks me is the lack of explanation of the rebracketing, which would appear like a mistake. But "nick of time" and the like would make a reanalysis as "short name" plausible, so not a mistake but a funny play of words.
No, I don't believe this. Rather, as you note with inhibition, you rationalize an excuse beforehand, here that you have no control over your actions; and as that has so far always worked out as far as immediate gratification is concerned, whereas the detriment is harder to grasp, the inhibition is inhibited. The mind is complex, and for every prohibitive experience there is an inclination to find an excuse to justify your actions. Of course, in habitual actions these processes are pretty deeply ingrained, quick, and hence not very conscious compared to much more complex problems that might even compete for attention. Still, the rationalization of what was done can only come afterwards. That part is correct.
Well, my source is a linguistics textbook (Language Files, 11th edition, pp. 15-16), but it says that these changes happened as late as the 17th and 18th centuries, because scholars considered written Latin to be the ideal language (likely because most historical works had been translated to Latin at some point). So even though ending sentences with prepositions had been common for centuries in spoken English, it became frowned upon because it was not allowed in Latin.
Specific examples in the book of "rules" applied to English to match written Latin include:
- Don't end sentences in prepositions
- Don't split infinitives
- Don't use double negatives
The chapter as a whole was actually about linguistic prescriptivism, but the Latin examples are pretty interesting nonetheless. The 17th and 18th centuries seem pretty far past the point at which French would have been the big influence, but there's no doubt French influenced English as well (though more in vocabulary, it seems).