That's not correct; you cannot use a double-slit test to check for entanglement. Running a photon through a double-slit setup always just produces a single dot, not any sort of pattern. To get a pattern, you need to run a bunch of photons through it and see if a fringe pattern appears [1].
(BTW, you never get a two-line pattern in a decent setup. This is an incredibly common mistake, but it's simply wrong. The interference (which produces the fringes) only happens where the separate patterns from the two slits overlap, so if you want a lot of interference, you need them to overlap a lot. So in the no-interference case, you won't get two separate lines with a gap between them; you'll get a single merged wash (with probably some fine structure due to diffraction within each slit, but that'll also be there when there is interference, on top of the two-slit interference fringes).)
You might think "ok, I'll do this with a bunch of photons, measure/not measure all of their twins, and see if the bunch of them show fringes." This is more-or-less what's done in the delayed-choice quantum eraser experiment, but it doesn't work out in a way that allows communication. What happens is that you always get the no-interference pattern. In order to see interference fringes, you need to split the individual photons' dots up based on the result of the measurement you made on their twins. Based on those measurements (if you made them), you can split the photons up into two groups, which'll have fringes with equal-and-opposite patterns (i.e. each will have bands where the other has gaps [2]).
If you didn't measure the twin photons (or made some other measurement on them instead), you can't split them up, so you won't see the fringes. But that's not because the measurements were different, it's just that you can't split them up afterward to see the fringes. And even if you did measure the twins, you can't split them up until you get a list of which twin got which result -- which can't be sent faster-than-light.
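(A quick sanity check on why the unsorted total never shows fringes, at least in the idealized setup: writing ψ1 and ψ2 for the amplitudes reaching a given spot from the two slits, the two sorted groups have patterns proportional to |ψ1 + ψ2|² and |ψ1 − ψ2|², and

    |ψ1 + ψ2|²/2 + |ψ1 − ψ2|²/2 = |ψ1|² + |ψ2|²

The cross terms cancel, so one group's bands sit exactly in the other's gaps, and adding them back together leaves just the fringeless sum of the two single-slit patterns.)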
Net result: no, you can't send information via entanglement, you can only get correlation.
Brackets are used in shell wildcard ("glob") expressions. For example, if you try to use "[bar]" as a command, the shell will first look for files named "b", "a", and "r" in the current directory, and if it finds any it'll use the first one as the command name and any others as arguments to it.
But as far as I can see, using a close-bracket as the first character in a command is safe, since it cannot be treated as part of such a pattern. Open-bracket (without a matching close-bracket) would work in many shells, but will get you a "bad pattern" error in zsh.
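A quick way to see the glob behavior (scratch directory; assuming bash with its default globbing, where an unmatched pattern is passed through unchanged):

    $ cd "$(mktemp -d)"
    $ touch a b
    $ echo [bar]      # the shell expands [bar] against the filenames before echo runs
    a b
    $ echo [xyz]      # nothing matches, so bash passes the pattern through literally
    [xyz]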
True, but since everyone in the study -- both those with and without diagnosed COVID-19 infections -- had been subject to this, it shouldn't affect the results. Essentially, they're comparing people who were just trapped indoors vs those who were trapped indoors and also had diagnosed COVID-19 infections (and they also broke the infected group down by severity of infection, what variant was prevalent when they were infected, etc).
Photons are the means of transfer for all energy sources at some level, so using that as a comparison for the actual generation method seems a little pointless.
echo -n is not safe, because some versions of echo will just print "-n" as part of their output (and add a newline at the end, as usual). In fact, XSI-compliant implementations are required to do this (and the same for anything else you try to pass as an option to echo). According to the POSIX standard[1], "If the first operand is -n, or if any of the operands contain a <backslash> character, the results are implementation-defined."
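For what it's worth, the usual portable replacement is printf, which has no such ambiguity. A minimal sketch:

    printf '%s' "test"      # like `echo -n "test"`, but portable; no trailing newline
    printf '%s\n' "-n"      # prints -n literally, then a newline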
Thanks - I wasn't aware that echo was that problematic, as I target bash (usually v4 and above) in my scripts.
I just tested it out with:
    sh /bin/echo -n "test"
    /bin/echo: 3: Syntax error: "(" unexpected
I didn't realise until recently that printf can also replace a lot of uses of the "date" command, which is helpful with logging since it avoids calling an external command for every line logged.
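For example (assuming bash 4.2 or newer, where printf supports the %(...)T time format; -1 means "now", and the log message is just a placeholder):

    printf '%(%Y-%m-%d %H:%M:%S)T %s\n' -1 "starting backup"
    # vs. forking date for every log line:
    echo "$(date '+%Y-%m-%d %H:%M:%S') starting backup"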
I (vaguely) remember playing games with terminal echoback on physical terminals back in the early-mid 1980s when I was in college. This was on a VAX/VMS system.
Someone (I don't remember who did what here) discovered that they could get `SHOW SYSTEM` (roughly analogous to unix `ps` command) to display their name in reverse video by adding escape sequences to their process name. So a bunch of us started experimenting to see what else we could embed in there.
Most of the terminals attached to the VAX were Zenith Z-19s, which mostly emulated DEC VT-52s but with some added features. One of those added features was an enablable 25th line (in addition to the regular 24x80 display) that functioned as a sort of status line. We found we could enable that, write something into it, then use the "transmit 25th line" escape sequence to send its contents back to the VAX. I remember having to work around limitations like it sending an escape sequence before the 25th line (which confused VMS), and I think it didn't send a carriage return at the end... or something like that.
I don't think we ever got it to do anything terribly interesting, but it was fun to play with. And then IIRC a VMS update blocked control characters in the `SHOW SYSTEM` listing.
That's what I do. I have one account ("Apple ID") for iCloud, and a separate one for music and App Store purchases.
A couple of caveats, though: Apple encourages using the same account for everything, and their interfaces try to autopilot you into that setup. You have to pay attention, and find & choose the "I'll set it up myself" options. Also, Apple uses email addresses as the name/identifier for Apple IDs, so to set up multiple IDs, you need multiple email addresses. iCloud includes an optional email account, so it's easy to use that for the iCloud account itself and your personal email address for the other.
Which reminds me: don't tie your personal stuff (iCloud, purchases, whatever) to an Apple ID under your company email address. If it's stuff you should keep after leaving your current job, it should be under an Apple ID that's tied to an email address you'll still have after leaving the job. On the other hand, for things that're part of the job (e.g. apps purchased by the company for the job), it should be under an Apple ID "owned by" the company and tied to a company-controlled email address.
I find that the `pbpaste | something | pbcopy` idiom is common enough that it's worth having a shell function for it:
    pbfilter() {
        if [ $# -gt 0 ]; then
            pbpaste | "$@" | pbcopy
        else
            pbpaste | pbcopy
        fi
    }
Then you can use something like `pbfilter json_pp` or `pbfilter base64 -d` or `pbfilter sed 's/this/that/'` or whatever.
This version can also act as a plain-text-only filter. If you just use `pbfilter` with no arguments, it'll remove any formatting from the text in the pasteboard, leaving just straight plain text.
It does have some limitations, though: you can't use it with an alias, or a pipeline, or anything complex like that. The filter command must be a single regular command (or function) and its arguments.
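If you do need a pipeline or other shell syntax, one workaround is to wrap it in a throwaway function first (a sketch; json_pp and the grep pattern here are just placeholder examples):

    pretty_errors() { json_pp | grep -i 'error'; }
    pbfilter pretty_errors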
It usually doesn't matter much, but there are some situations where it can matter a lot. For one thing, you can't use seek() on a pipe, so e.g. `cat bigfile | tail` has to read through the entire file to find the end, but `tail bigfile` will read the file backward from the end, completely skipping the irrelevant beginning and middle. With `pv bigfile | whatever`, pv (which is basically a pipeline progress indicator) can tell how big the file is and tell you how far through it you are as a percentage; with `cat bigfile | pv | whatever`, it has no idea (unless you add a flag to tell it). Also, `cat bigfile | head` will end up killing cat with a SIGPIPE signal after head exits; if you're using something like "Unofficial bash strict mode" [1], this will cause your script to exit prematurely.
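Concretely (bigfile is a hypothetical large file):

    tail -n 5 bigfile          # seeks near the end; fast no matter how big the file is
    cat bigfile | tail -n 5    # tail only sees a pipe, so all the data gets read through it

    pv bigfile | gzip > bigfile.gz         # pv knows the file size, so it can show a percentage
    cat bigfile | pv | gzip > bigfile.gz   # pv only sees a pipe; you'd need pv -s <size> to fix that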
Another sometimes-important difference is that if there are multiple input files, `somecommand file1 file2 file3` can tell what data is coming from which file; with `cat file1 file2 file3 | somecommand` they're all mashed together, and the program has no idea what's coming from where.
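For example, with grep (file names hypothetical):

    grep -H 'ERROR' app.log db.log         # each match is prefixed with the file it came from
    cat app.log db.log | grep -H 'ERROR'   # every match is just labeled "(standard input)"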
In general, though, I think it's mostly a matter of people's expertise level in using the shell. If you're a beginner, it makes sense to learn one very general way to do things (`cat |`), and use it everywhere. But as you gain expertise, you learn other ways of doing it, and will choose the best method for each specific situation. While `cat |` is usually an ok method to read from a file, it's almost never the best method, so expert shell users will almost never use it.