Instead of counterfeiting, operating beyond the boundaries of well-understood legality, deceiving users, and engaging in sketchy malware practices, what if they were better behaved?
What if they didn't make such an obvious attempt to create a direct knock-off of the latest iPhone? What if they created a comparable device to stand on its own? What if they clearly labeled everything that the phone was really doing? What if they weren't scraping every interaction the user has with the device?
What if they took all of that deceptive effort, and all of that creativity, and poured it instead into producing a device that could be trusted, and into improving the fundamental device they wished to create?
What if they made something they could put their names on, without inviting all the consequences their dubious behavior would surely bring if we knew who was behind this sort of thing?
What if they could admit to what they were doing, because it sought to benefit their patrons, instead of posing obvious risks to anybody spending $100 on their stuff?
Legalities may depend on the country of origin, and honestly, most of the engineers may not even be aware that they're deceiving anyone; they're just trying to copy a device at low cost.
Most businesses do something that's identical to work done elsewhere. Many, many, many companies exist whose main line of business is to take a successful product and clone it cheaply for a market that can't afford the "authentic, professional" version. That is a perfectly normal, legitimate business.
I'm willing to bet that most of the engineers who did this phone never thought they were doing anything wrong. In fact, the "information should be free" arguments that many posters use on this site could be used to justify exactly the production of this phone!
IP protection gets interesting when it protects something you actually care about...
What if they had access to the IP that is currently locked away by the likes of Apple/Google?
What if anyone who wanted to replicate a good design (and improve upon it) were able to do so without restrictions, or fear of lawsuits?
Conversely, what if what we treat as the sacrosanct right to profit from a one-time invention (as if no one else could ever come up with a similar idea on their own, though history has repeatedly shown otherwise) had been applied throughout history? Would even railways, electricity, or the simple things we take for granted now have become so widespread? Maybe we would be flying in counterfeit airplanes.
>What if they had access to the IP that is currently locked away by the likes of Apple/Google? What if anyone who wanted to replicate a good design (and improve upon it) were able to do so without restrictions, or fear of lawsuits?
Then nobody would invest any money in releasing stuff and we would still live in the stone age.
Here's an interesting article about 19th century Germany when no copyright laws existed yet:
The Real Reason for Germany's Industrial Expansion?
Did Germany experience rapid industrial expansion in the 19th century due to an absence of copyright law? A German historian argues that the massive proliferation of books, and thus knowledge, laid the foundation for the country's industrial might.
I think that would mean only a Dyson hemisphere, since Earth is illuminated by the entire hemisphere of the sun that faces us, due to the sun's sheer size.
It would be fascinating to see the kinds of effects this would have on the gas giants. It would probably drastically alter the Coriolis banding and spots on Jupiter and Saturn.
Venus would probably be the only other planet affected, and perhaps comet behavior might change. In fact, I wonder if planets would become so cold that they might produce visible comas as they warm up when they enter the sun's (now hemispherical) light cone. That's probably the most prominent effect I can see happening.
But does it have to totally block it out? What if we left space in between units so that it just dimmed the sun behind it? Hell, maybe it would help reduce global warming so we can just keep on keepin' on. No need to reduce output levels of anything other than the amount of sun we receive.
Plus, it would be fun to screw with societies in other parts of the galaxy that are using their Kepler satellite equivalents. Every time the Earth transits, the light would dip in a way that would lead the observers to think Earth was much larger, thereby much less massive (for the math to work). Great defensive strategy as they would not think our planet was worth pursuing.
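For fun, the arithmetic the distant observer would run is simple: transit photometry infers a planet's radius from the fractional dip in starlight, depth ~ (Rp/Rs)^2. A toy sketch of that inference (the 10x dimming factor below is an invented assumption for illustration, not derived from any actual swarm design):

```python
# Toy transit-depth calculation: a Kepler-style observer infers planet
# radius from the dip in starlight, depth ~ (R_planet / R_star)^2.

R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0

def transit_depth(r_planet_km: float, r_star_km: float = R_SUN_KM) -> float:
    """Fraction of starlight blocked during a central transit."""
    return (r_planet_km / r_star_km) ** 2

# Earth alone dims the sun by roughly 84 parts per million.
print(f"Earth transit depth: {transit_depth(R_EARTH_KM):.2e}")

# ASSUMPTION: a partially opaque swarm near Earth's orbit deepens the dip
# tenfold. Solving depth = (Rp/Rs)^2 for Rp, the observer would infer a
# radius sqrt(10) ~ 3.2 times larger than Earth's actual one.
swarm_depth = 10 * transit_depth(R_EARTH_KM)
inferred_radius_km = R_SUN_KM * swarm_depth ** 0.5
print(f"Inferred radius: {inferred_radius_km:.0f} km vs actual {R_EARTH_KM:.0f} km")
```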
> The basic principle of the technique was proposed almost 50 years ago by the physicist Walter Hoppe, who reasoned that there should be enough information in the diffraction data to work backwards to produce an image of the diffracting object.
This kind of statement just absolutely cracks me up, because it's a clear reveal that, between this sort of awareness of diffraction principles and concepts like pilot wave theory, double slit experiments and entanglement haven't been mysterious for decades.
It's all just media manipulation. There are very firmly understood concepts backing all the mechanics of quantum effects, and the journalists that push the ambiguities are simply trolling would-be amateurs to fan the flames of confusion as a sort of outsider performance art.
There is enough information, should one be able to retain the phase; getting it from intensity is much more challenging.
I'm familiar with the work of one of the authors; he is a world expert on diffraction inverse problems in physical context. From a quick skim of the paper, it would appear that they're simply being careful and clever.
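To make "getting it from intensity" concrete: below is a minimal error-reduction loop in the Gerchberg-Saxton / Fienup family, which tries to recover an object from its Fourier magnitudes alone, given a known support. It's only a sketch of the general idea; ptychography proper uses overlapping scanned illumination spots and considerably more sophisticated algorithms, and plain error reduction can stagnate (Fienup's HIO is the usual remedy).

```python
import numpy as np

# Error-reduction phase retrieval: we "measure" only |F(object)| and
# iterate between Fourier space (impose the measured magnitudes) and
# real space (impose a known support and nonnegativity).

rng = np.random.default_rng(0)

n = 64
support = np.zeros((n, n), dtype=bool)
support[16:48, 16:48] = True                  # assume the object's extent is known
truth = np.where(support, rng.random((n, n)), 0.0)

measured_mag = np.abs(np.fft.fft2(truth))     # intensity-only measurement

estimate = rng.random((n, n)) * support       # random starting guess
for _ in range(500):
    F = np.fft.fft2(estimate)
    F = measured_mag * np.exp(1j * np.angle(F))        # keep phase, fix magnitude
    estimate = np.fft.ifft2(F).real
    estimate = np.where(support, np.clip(estimate, 0.0, None), 0.0)

err = np.linalg.norm(np.abs(np.fft.fft2(estimate)) - measured_mag)
err /= np.linalg.norm(measured_mag)
print(f"Relative Fourier-magnitude error after 500 iterations: {err:.3e}")
# Note: recovery is only up to trivial ambiguities (translation, twin image).
```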
I'm not sure I completely understand your comment. Are you suggesting this is not a significant advance in the field?
I am not an EM expert, but the fact that it's been through peer review and gotten published in Nature seems to suggest scientists in the field think it is a significant advance. And from what I do know of microscopy, it is still not trivial to image (sub?) atomic-size structures.
Extending the quoted paragraph:
"The basic principle of the technique was proposed almost 50 years ago by the physicist Walter Hoppe, who reasoned that there should be enough information in the diffraction data to work backwards to produce an image of the diffracting object.
However, it was many years before computer algorithms were developed that could do this reverse calculation easily and reliably. The pictures produced by ptychographic methods are generated using a computer from a vast amount of indirect scattering data."
The News & Views article does not claim that this is a basic science advance; they are claiming it's an engineering / methodology / procedural advance. And those are just as important, IMO.
Unless you are suggesting that, once the basic science is known, any engineering advance is trivial. If so, then you & I have a very different impression of how easy / difficult it is to build new "things" :)
None of the things you've mentioned are anything close to what I'm bringing up.
The point I'm making is that popular discussions of quantum effects are so wildly off-base that they have muddied the waters for anyone trying to understand what happens between photons and electrons by casually reading about it.
But you see something like this emerge, and it's really obvious that solutions to these problems were on the right track even as far back as the early 1900s, only to be derailed by academics emerging in the 1940s.
So, was there an ulterior motive to all the complex obfuscation of math, and inaccurate scientific reporting throughout the later 20th century? Or has it all been one big, innocent misunderstanding, among aloof eggheads distracted by their gigantic precious particle colliders?
This is a massive conflation (and misunderstanding) of quantum mechanics. The processes being discussed in this paper are statistical and large, and thus pretty highly classical in nature.
The double-slit experiment, entanglement etc. are all concerned with what happens with individual particles to produce those statistics, and what that means.
For example, it's not remotely surprising that diffracted light can reconstruct an image of an object (this idea has been around for a while, e.g. evanescent wave fluorescence, or the quest for negative-refractive-index materials that would beat diffraction limits). But that's not why it's not surprising: it's not surprising because you can also detect the existence of solid objects without touching them with so much as a photon, purely by letting the probability field of one potentially extend through them, and then observing whether you see diffraction patterns along the path where photons do travel (basically, a highly biased double-slit experiment reveals whether it would've interfered, well before a particle is ever likely to have hit the object obstructing one of the beam paths).
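A sketch of that last point, since it sounds paradoxical: a minimal Mach-Zehnder model of interaction-free measurement in the Elitzur-Vaidman style. The 50/50 beamsplitter matrix and port labels are the standard textbook setup, not from any particular experiment.

```python
import numpy as np

# Mach-Zehnder "interaction-free measurement": the output-port statistics
# reveal whether one arm is blocked, even for photons that never touched
# the blocker.

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beamsplitter

# No obstruction: both arms open, amplitudes interfere at the second BS.
amp = BS @ BS @ np.array([1, 0])
print("open arms:   P(port0)=%.2f  P(port1)=%.2f" % (abs(amp[0])**2, abs(amp[1])**2))

# Obstruction in arm 1: that amplitude is absorbed (photon lost with
# probability 1/2); the surviving single-arm amplitude cannot interfere.
after_bs1 = BS @ np.array([1, 0])
blocked = np.array([after_bs1[0], 0])            # arm 1 amplitude removed
amp = BS @ blocked
p0, p1 = abs(amp[0])**2, abs(amp[1])**2
print("blocked arm: P(port0)=%.2f  P(port1)=%.2f  P(absorbed)=%.2f"
      % (p0, p1, 1 - p0 - p1))
# Port 0 never fires with both arms open, so a click there certifies the
# blocker's presence even though the detected photon took the open arm.
```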
The point being that computation is so cheap now that it's more difficult to promote confusion and obscure facts.
It used to be expensive to compile massive data sets and reduce them to reliable statistical evidence, so it was easy to push concepts that had little supporting evidence. For example: "the particle passes through both slits", "the cat is alive and dead", "there are no hidden variables", "the source of an emission never ascribes state to its particles, and that state does not exist until inspected"
Now, such wild claims are in disagreement with rivers of data that are much more easily produced and reviewed computationally. Observations that were not previously possible now shed light on facts that were previously obscured. Without backing data, ideas prone to confusion could take root, particularly with voices of academic authority shouting down concepts that threaten the ivory tower.
But now, technology to conduct measurements is cheaper, and data shouts louder. So, something presented as fact in A Brief History of Time (the particle passes through both slits) can no longer be supported by fame alone, simply because the author is revered. It's easier to produce and publish data (make high-fidelity video recordings of the behavior of silicone oil beads demonstrating pilot wave phenomena, and post them on YouTube).
In this example, pulling together raw data from sensor streams and dumping it into a high-performance computing pipeline reveals that diffraction itself is a state-producing phenomenon, and that reliable variables are produced by the diffractor, only to be hidden later by subsequent polarizers that drive downstream state. If the hidden variables weren't reliable, there would be no possibility of composing an image from the statistical analysis of the diffraction. The diffraction would produce no reliable signal to reconstruct, because that signal would not exist, since local hidden variables are forbidden behind no-go boundaries.
Actually, I don't think it's stupidity... it's greed. Telling stories that sound good is unrelated to truth, but declaring them science or news (when you don't know) is deceptive.
I'm just reading "What is Real?" by Adam Becker (a history of QM published this year) and boy do the Copenhagenists look foolish. Even Bohr comes off as a saintly buffoon.
With experimental confirmation that the universe is non-local and "spooky action at a distance" is real, the pilot wave theory "wins" and there's no measurement problem.
(It isn't journalists though, it's the physicists themselves that muddied the waters by permitting herd mentality to overwhelm science. Also, von Neumann got a proof wrong! Folks can be forgiven for not suspecting that. But once it was noticed then the "orthodoxy" should have paid attention.)
No doubt. But the facts about which the opinions are held are pretty staggering.
Von Neumann got a proof wrong in his textbook on QM. Grete Hermann found the error in 1935 and nobody noticed. De Broglie presented a "pilot wave" theory at the Fifth Solvay Conference in 1927.
Einstein kept pointing out the problem with non-locality and everybody thought he was getting old and foggy.
Physics is hella tribal.
I took the Copenhagen metaphysics pretty seriously. It's so neat and elegant to confound the mystery of quantum wave-function collapse with the mystery of subjective experience. The "observer-created Universe" and all that. It's very disturbing to realize that it's basically metaphysical bunk. It's just staggering.
But never mind all that!
The Universe is non-local!
*And yet-- Relativity!"
Nothing can go faster than the speed of light, but wave collapse does, so this is going to be some awesome physics!
> I've also found focusing on the resource/noun aspect ( ... ) to be a bit of a "nerd trap";
Oh, absolutely. Ten years ago, during one of my first projects involving a REST implementation, I was working with a person who I deemed fairly rational, and reasonable to work with.
Coding up the REST paths was a breeze, and I had implemented like 99% of perhaps a month's worth of work in under a week. I laid out some quick docs for how to interact with the URL patterns and, proud of how quickly the whole thing snapped into place, I showed it to the guy.
He immediately jumped all over me, complaining loudly that the URLs "were not RESTful!!!" Which, of course, shocked me. Was he joking? Certainly not. This was a very serious error.
Well, okay! Big deal!
So I spent 15 or 20 minutes performing search and replace, changing words from things like:
/blue/104
/read/9008
/fetch/18
To things like:
/car/104
/book/9008
/page/18
And wowee, crisis averted. I was still weeks ahead of schedule, and the commit was all of 10 minutes of ruminating over preferable synonyms to please the sensibilities of humans, rather than struggling with computational correctness.
During this time, I briefly mulled over the idea of trolling him with a many-to-many lookup table, introducing variable synonyms that include opaque MD5 hashes of my original path names and resolve to the same effective resource.
I don't think your pedantic coworker's criticism is unreasonable at all. APIs, HTTP or otherwise, should communicate what they actually do as best as possible. I have no idea what "blue" is supposed to mean in context, and I struggled to connect "blue" to "car."
What the article is saying, and what I think GP is saying, is that trying to model (non-CRUD) actions as "resources" in the pursuit of being "correct REST" is often a waste of time. `/nouns/8000/verb` is somewhat obvious to me (it does `verb` to `noun` 8000) despite not being "proper REST" whereas `/verb/8000` doesn't (what exactly am I `verb`'ing?), `/green/8000` would just make me scratch my head, and `/9f27410725ab8cc8854a2769c7a516b8/8000` tells me I'm wasting my time with this API and should go do something else.
As a user or developer I think being obvious and usable is more important than being "correct REST" and so I don't even use the term anymore, I just say HTTP API.
You might think it is just "pedantry" or "semantics" but consider that you are not the only one who will be using or reading these APIs, and design accordingly.
(However, if your coworker really was jumping up and screaming loudly at you because of this, this suggests he may not be as enjoyable to work with as you think)
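For what it's worth, the noun-vs-verb distinction is easy to show in code. A minimal sketch (Flask and these particular routes are my own illustration, not anything from the anecdote):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Noun-based routes: the path names the resource, the HTTP method names
# the action. Compare GET /book/9008 here with the verb-first GET
# /fetch/9008 from the anecdote: identical at the HTTP layer, but the
# noun form tells a reader what kind of thing 9008 is.

BOOKS = {9008: {"title": "An Example Book"}}

@app.get("/book/<int:book_id>")
def get_book(book_id: int):
    book = BOOKS.get(book_id)
    return (jsonify(book), 200) if book else ("not found", 404)

@app.delete("/book/<int:book_id>")
def delete_book(book_id: int):
    BOOKS.pop(book_id, None)
    return "", 204

if __name__ == "__main__":
    app.run()
```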
That's you, as an English speaker. What about the rest of the world?
Must everything be left-to-right? What if the nouns and verbs were not English words? What if the natural order of possessives or adjectives is reversed in a Romance language? What if the words are not even cognates, and the characters used to express terms are not letters at all, but opaque pictographs?
The paths chosen are an abstraction, and REST is designed such that it should be easy to bind any one resource to perhaps multiple URLs. But honestly, to the vast majority of the world, these URLs never see the light of day, and every consumer of a REST URL is probably going to do so programmatically, perhaps once, while reading the docs and setting up tests, and then they'll call it a day, and never look back, because honestly, consuming data from API endpoints isn't actually anyone's idea of "fun" despite wild claims to the contrary.
For the sake of getting shit done, I usually avoid this conversation in practice, but honestly, also because arguing is worthless. People have their opinions as a matter of convenience to their own efforts, and not as considerations to others.
These are the tastes of a given developer creeping in as a leaky abstraction of human readability.
If you take the "two hard problems in computer science" quote seriously -- or if you simply consider that humans are part of the system -- then naming issues probably deserve careful attention.
Then again, if humans are part of the system, it's also probably a good idea to use more reserve with coworkers than "jumping all over me" implies.
Driving doesn't fit into such a world view. Driving through a long tunnel never takes anywhere near 30 minutes, and tunnel lengths are never such that, if your car got stranded, walking out on foot to reach the surface would challenge people with small children.
Subways are categorically a different animal. They demand less endurance than light rail networks, and the excess time spent in them comes from unintended delays and speed limits that draw out travel times. Spending more than 45 minutes on a subway trip is often viewed as an unhappy inconvenience and/or an error.
> Driving through a long tunnel never takes anywhere near 30 minutes, and tunnel lengths are never such that, if your car got stranded, walking out on foot to reach the surface would challenge people with small children.
Aside from unusual traffic delays, you are probably right about the 30 minutes thing (the longest road tunnel is just over 15 miles long), but that tunnel also has no emergency exits other than the ends, and the maximum 7.6-mile trek to an exit with both ends clear (and, a fortiori, the longer exit journey if you are not starting at the middle and the nearer exit is not usable) is likely to challenge people with small children.
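As a rough sanity check on that trek (the walking pace here is my own assumption for adults with small children, not a figure from anywhere):

```python
# Back-of-envelope worst case for walking out of the longest road tunnel,
# assuming exits only at the two ends.

tunnel_miles = 15.2                   # "just over 15 miles"
worst_case_miles = tunnel_miles / 2   # stranded at the midpoint
speed_mph = 2.0                       # ASSUMED pace with small children

hours = worst_case_miles / speed_mph
print(f"Worst-case walk: {worst_case_miles:.1f} miles, about {hours:.1f} hours")
```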
Nowadays, several countries have GDP much greater than "£100B" and I think that was very much not the case in 1905. Anyone have the figures for 1905 at their fingertips?
Also, the "crying with laughter" face, in practical use, comes across as absurdly over-eager to express inauthentic laughter. Most people just knee-jerk flood you to hell with those things, at the slightest hint of a joke.
I'm tired of seeing that stupid, stupid cartoon face. I wince when I see it. It pains me. It's like getting stabbed in the gut for offering chewing gum to someone. I don't like to tell jokes anymore.
I want to reply:
Please stop laughing so hard.
The joke I made was not that funny.
I notice Facebook users have the worst sense of habitual knee-jerk emoji obligation. I think it's because they are restricted to expressing only five emotions.
1. Thumbs Up (like, but often mere acknowledgement)
2. Heart (love, but often off-topic)
3. Wow (horrified, shocked or amazed, but which?)
4. Cry (sad, but why?)
5. Mad (angry, but at who?)
In most cases, this leaves them resorting to an ambiguous "Wow" for most non-thumbs-up reactions, leaving the reader to question if it's a good wow, or a bad wow.
Because they've accepted ambiguity as fact when communicating with individual icons, they also lack a sense of volume. They've been trained to understand that CAPS LOCK is bad, but it will be years before emoji fatigue sets in. For this reason, I've mostly withdrawn from interacting with many people.
Except you can't tell me what the spin is without polarizers. This is like telling me I have to measure the orientation of a fidget spinner with a fitness club's treadmill set to the brisk pace of an uphill jog.
All of the experiments use polarizing lenses to make a determination of results on both sides. This is where the experiments are fundamentally flawed and offer only weak evidence.
To simply read about the fundamentals of light polarization is to understand that quantum wave function collapse is much ado about nothing, and it becomes obvious that all this contention is total bullshit, and none of it is magic.
If you read through the list of Bell test experiments [1], you will discover that not all of them are done with photons, for example »Violation of Bell’s inequality in Josephson phase qubits« [2].
You complained - without providing any substantive arguments why this might be an issue - that all Bell test experiments use polarizers, I pointed out that you are wrong. I have no idea what you are complaining about now, what is »[...] qualities of the mediums that quantum uncertainty affects [...]« even supposed to mean? You are obviously far outside of your area of competence. If not, just do the experiment, write the paper, and collect your Nobel Prize, no need to argue with clueless people on the web.
A Josephson junction doesn't even trap any single actual physical particle. It's just a standing wave of electrical current trapped in a bounded array of geometrically crafted slabs of superconductors and insulators. It's a phenomenon that arises from the construction of the device.
Unlike fundamental particles such as photons and electrons, there is nothing substantial about the state represented by the standing-wave qubit trapped in a circuit operated by a Josephson junction device. Destroy the device (or never mind that, just never place it in a dewar chilled to 4 kelvin to activate it) and the phenomenon doesn't even exist. So much for the idea that matter and energy can neither be created nor destroyed.
To sit there and state that, on paper, this is the same thing as an individual electron emitted as beta decay is, well... fundamentally flawed.
Just to be clear, these are experimental results that are statistically impossible assuming classical physics. Are you actually worried that they are somehow studying the wrong self-evidently non-classical thing, or what?
"The zero voltage state describes one of the two distinct dynamic behaviors displayed by the phase particle, and corresponds to when the particle is trapped in one of the local minima in the washboard potential. [...] With the phase particle trapped in a minimum, it has zero average velocity and therefore zero average voltage. [...] The voltage state is the other dynamic behavior displayed by a Josephson junction, and corresponds to the phase particle free-running down the slope of the potential, with a non-zero average velocity and therefore non-zero voltage."
So, we're not even talking about actual fundamental subatomic particles anymore. We're talking about phase oscillations, and renaming that as if it were a "particle" because, hey, particle/wave duality, so why not?
Hand-wavey math permits us to equivocate that a current induced on a wire, by way of the transfer of many actual electrons across substrates, can serve to prove the premise of a "teleportation device" also.
See? If we play our game of three-card monte, change phase oscillations, wiggle our noses, and tilt our heads a little, it's all very obvious that faster-than-light information transfer can be generalized to fit in the same picture, because this tuning fork makes that tuning fork ring in harmony, but only when we choose to notice.
I’m not sure I follow your argument, but if I understand you right, I don’t believe what you’re quoting is relevant. From the article you are (I think?) criticizing:
> We measure a Bell signal S of 2.0732 ± 0.0003, exceeding the maximum value |S| = 2 for a classical system by 244 standard deviations. In the experiment, we deterministically generate the entangled state, and measure both qubits in a single-shot manner, closing the “detection loophole” [11]. Since the Bell inequality was designed to test for non-classical behavior without assuming the applicability of quantum mechanics to the system in question, this experiment provides further strong evidence that a macroscopic electrical circuit is really a quantum system [7].

https://web.physics.ucsb.edu/~martinisgroup/papers/Ansmann20...
That says in plain English that they have not assumed that this system behaves according to quantum principles. In fact, it is precisely the opposite: the quantum nature of this system is a conclusion of their results. It would be statistically impossible for any system following classical rules to produce the same data.
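If it helps, the arithmetic behind that Bell signal is compact. Here's a minimal CHSH sketch using the textbook angle choices (these settings are illustrative; the paper's actual measurement settings may differ):

```python
import numpy as np

# CHSH Bell signal: S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Any local hidden variable model obeys |S| <= 2; quantum mechanics
# allows up to 2*sqrt(2). For a singlet state, E(x, y) = -cos(x - y).

def E(x: float, y: float) -> float:
    """Quantum correlation between measurements at analyzer angles x, y."""
    return -np.cos(x - y)

a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))
print(f"|S| = {S:.4f} (classical bound 2, quantum max {2 * np.sqrt(2):.4f})")
```

The measured 2.0732 is far from the quantum maximum, but that doesn't matter: the point is only that it sits above 2, by 244 standard deviations, which no local classical strategy can reproduce.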
(It bears repeating that the math underlying that conclusion is truly not very complex, and it is very, very well studied. If you can show that it’s flawed somehow, don’t bother publishing— just post your proof here and I’ll, uh... pick up the Nobel for you.)
The only caveat is that this experiment closes the detection loophole, but not the locality loophole; it is theoretically possible that a classical signal could be sent from one qubit to the other quickly enough to fabricate this data. There’s no particular reason to suspect a secret signal is in play, but it isn’t theoretically prohibited.
Assuming you haven’t found a flaw in their mathematics, and that you aren’t alleging that the researchers deliberately fabricated their data, the locality loophole is your best (and likely only) avenue to dispute their conclusions. However, if you wish to pursue that, you should keep in mind that there are many other experiments which close the locality loophole but not the detection loophole, and, since 2015, several that close both. Three-card monte may be a better investment of your time.
Uh, wow, at no point have I made the claim that an electrical circuit is not a quantum system. Nor have I claimed that they are incapable of simulating quantum phenomena. Quite the very opposite.
What I did clearly state, and insist is quite relevant, is that entanglement and double slit experiments are hocus pocus and irrelevant distractions. In fact, I stated that this experiment says basically nothing, because it merely simulates quantum phenomena within a circuit.
Hello? Yes. Electrons are quantum entities, and assuredly interact with photons which are also quantum entities. This is demonstrated by the photo-electric effect, which we can all notice by placing tin foil in a microwave. Therefore a circuit is indeed a quantum system, since it assuredly deals in electrons.
Wow! Didn't even need to publish a paper about qubits to draw that conclusion! Amazing!
The implication here is that Bell is a waste of time, and so is his theorem: that emission doesn't determine state, especially when you don't look at it.
Great, thanks Bell. I'll be sure to not look at anything until I want to know what it is. True genius at work.
The experiment demonstrates quantum entanglement. I gather you don’t believe this. So how about this: I don’t believe you.
I don’t believe that you could, even theoretically, produce the data from a loophole-free Bell test without invoking superdeterminism, superluminality, or quantum entanglement.
Can you describe how this would be theoretically possible?
The experiment certainly demonstrates "something" in terms of how not to "measure" relativistic effects with macroscopic tools...
And yet, with relativistic particles, the wild claims are made that splitting photons through a substrate, and then passing them through the wall of a polarizing lens, means we can declare ourselves capable of rewriting and erasing history. Eh, not quite.
But hey, where there's smoke, there's fire, so something must be true, right? Let's just make up whatever.
1. Do you think all Bell test experiments ever done were flawed, i.e. none of the observed Bell inequality violations were real?
2. If so, do you think a non-flawed Bell test experiment could be done?
3. If so, do you have a definite opinion about whether or not it would yield a Bell inequality violation?
4. If so, would it yield a Bell inequality violation or not?
5. If you think all Bell test experiments ever done were flawed, can you pick one, preferably one commonly considered a good one, and point out what exactly you think the flaw in the experiment is?
6. If you think Bell inequality violations are or could be real, how do you want to explain them?
Note that those are all yes/no questions, well, at least all but the last two. I don't need or want more than a yes or no for the first four, because from your comments alone it is not clear, at least to me, what your position actually is.