Quantum mechanics as a generalization of probability (2007) (scottaaronson.com)
209 points by ascertain on Sept 27, 2014 | 79 comments



I'd disagree with the title of this a bit. Rather than describing it as "probability to allow minus signs", I'd describe it as "probability in L^2 instead of L^1". The author actually discusses this a bit towards the end.

It makes sense to ask what is the distance between two continuous probability distributions. It's given by:

\int | p1(x) - p2(x) | dx

L^1 (the space of all functions for which \int |f(x)| dx < \infty) is a weird space, and does not admit concepts like "what is the angle between two vectors".

Quantum mechanics changes this to:

( \int |p1(x) - p2(x)|^2 dx )^(1/2)

Functions for which \int |f(x)|^2 dx is finite are called L^2. Once you put the square in, you can immediately derive a lot of geometry: inner products, angles between vectors, etc.

So I'd argue that QM is probability in L^2.
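For concreteness, here's a minimal numpy sketch (the two distributions are made up) of the two distances, plus the extra structure - an inner product, hence angles - that you only get in the 2-norm picture:

    import numpy as np

    p1 = np.array([0.5, 0.2, 0.3])   # ordinary probability distributions: entries >= 0, 1-norm = 1
    p2 = np.array([0.4, 0.4, 0.2])

    l1_dist = np.sum(np.abs(p1 - p2))                 # the L^1 distance between the distributions
    l2_dist = np.sqrt(np.sum(np.abs(p1 - p2) ** 2))   # the L^2 distance

    # In the 2-norm picture you also get an inner product, hence well-defined angles:
    psi1, psi2 = np.sqrt(p1), np.sqrt(p2)             # unit vectors in the 2-norm (squares sum to 1)
    angle = np.arccos(np.dot(psi1, psi2))             # only meaningful because L^2 has an inner product
    print(l1_dist, l2_dist, angle)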


> The author actually discusses this a bit towards the end.

Far from it. As early as the first subsection, "A Less Than 0% Chance", Scott writes:

"Now, what happens if you try to come up with a theory that's like probability theory, but based on the 2-norm instead of the 1-norm? I'm going to try to convince you that quantum mechanics is what inevitably results."

So discussing that distinction is the entire point of the lecture, and not some minor point "discussed a bit towards the end".


This is what the whole article says - the title is merely more layman-friendly wording.


It is L^2 for infinite dimensional Hilbert spaces like position or momentum, where quantum states become functions. For finite dimensional Hilbert spaces such as spins, the quantum state is a complex vector rather than a function, and the "braket" is not an integral but an inner product of vectors.

But I don't think that's even the point. The whole article is a rationalization of the basic rules of quantum mechanics; when I look back at the history of physics, rationalization of physical laws known at the time (which are often replaced by "better" laws later, once we get a better understanding of nature) has most often been counter-productive.


It's L^2 in finite dimensional spaces also - you can just as well define || v ||_2 = ( \int |v_i|^2 dC(i) )^(1/2) (where C is the counting measure). Finite dimensional spaces can also be interpreted as spaces of functions, i.e. C^3 is the same as {1,2,3} => C.
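(A trivial numpy check of that point, with a made-up vector: in finite dimensions the "integral" against the counting measure is just a finite sum, so the 2-norm reduces to the ordinary Hermitian norm.)

    import numpy as np

    v = np.array([1 + 1j, 0.5j, -2.0])               # a vector in C^3, i.e. a function {1,2,3} -> C
    l2_norm = np.sqrt(np.sum(np.abs(v) ** 2))        # "integral" of |v_i|^2 against the counting measure
    assert np.isclose(l2_norm, np.linalg.norm(v))    # same as the usual finite-dimensional norm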

This is a rationalization but it's useful. We have a physical theory in L^2, and we know that you can't have such a theory in L^2.1 or L^7. Our theories might be totally wrong. But we are confident they can't be only a little wrong.


> It's L^2 in finite dimensional spaces also

The concern of "is this thing square-integrable" arises only in infinite dimensional spaces. The inner product on $\mathbb C^N$, for finite N, is always finite. There is no point in saying "L^2" when talking about spins etc. (forcefully casting a finite sum of finite terms into an integral and then introducing the concept of "square integrable" isn't helpful, and is trivial at best).

> This is a rationalization but it's useful.

You can do that kind of rationalization only when you already "know" the answer. It has zero predictive power.

Newton had a rationalization about the independent nature of time. Descartes also had a rationalization about how action-at-a-distance works for gravity. The aether was also a popular rationalization once. None of those rationalizations held any value in the end.

Such "rationalizations" are better cut by Occam's razor and left for philosophers.

At least, as a physicist, I prefer not to waste my time with such stuff.


I also disagree; the article's rationalization is productive.

Your mention of Occam's razor is key. The point of this rationalization is that it shows quantum mechanics (or at least a subset of the theory) is mathematically simple, thus serving a threefold purpose:

1) Due to Occam's razor, a simpler theory is more likely to be correct than a more complicated theory with the same explanatory power. If quantum mechanics can be expressed more simply, then the estimated probability of its correctness should be increased (by only a slight amount).

2) Quantum mechanics, explained this way, is interesting from a purely mathematical standpoint. Even if we knew quantum mechanics doesn't describe reality, mathematicians (and theoretical computer scientists like Aaronson) would perhaps still investigate it. Of course, this may not interest physicists.

3) The article presents a novel way to teach quantum mechanics. Mathematical simplicity can (to some extent) replace intuition as a way for learners to grasp the theory. As Aaronson remarks, quantum mechanics is often taught by following the historical order in which the ideas were discovered. Starting from the "conceptual core" (if Aaronson is correct about the conceptual core) is arguably a superior pedagogical technique.


The problem with Occam's razor is that there is no generally agreed-upon notion of simplicity. Occam himself believed it proved the existence of God, and only God. After all, what did you need material reality for when it could be "explained" as God's dreams and imaginings? To Occam that was far "simpler" than the messy reality of matter, and the messy reality we know of today would have been anathema to him.

"Simpler" theories are not ab initio more likely correct. The world is full of "simpler" theories that are wrong: the four elements, the caloric theory of heat, Newtonian dynamics and gravity, and so on.

"Simpler" is a purely human notion, and a heavily culturally laden one at that. For this reason, Occam's razor is best left in the dust-heap of philosophy. It is never useful in doing actual science, except now and then by accident.


Occam's razor doesn't say "pick the simplest theory", it says "pick the simplest theory out of all that agree with all discussed experimental evidence". We didn't throw away four elements because they weren't simple enough; we did it because that theory didn't explain the things we saw.

As for the notion of simplicity itself, we have Kolmogorov complexity and Solomonoff induction; they capture the essence pretty well (though I hear there are some caveats).


That's actually not correct. From what I can tell, most theories with god in them that also predict reality can all be converted to a simpler theory without god.

The problem with all your examples is that Occam's razor says "choose the simplest theory that works", not the simplest of all possible theories.

I explored this idea in more detail here: http://www.chrisstucchio.com/blog/2014/why_to_reject_complex...


If you're going to pick nits like amateur philosophers and don't like the name Occam's razor, let me put it in more direct terms, less open to "weird" interpretations, so that hopefully you'll see what I'm saying: "just because I can" is not a good justification for putting unnecessary concepts into a theory. We already know about probability amplitudes and we understand them quite well (path integral formulation). There is no need to introduce crazy things such as "negative probabilities" into the theory, because 1) it doesn't make sense as a fundamental concept (and it just isn't one), 2) more importantly, it is not needed, and 3) it doesn't add anything new to our understanding of nature or predict anything at all.


There is nothing on that page that is simpler or more elementary than the quantum mechanics we already know.

Don't take his word on how quantum mechanics is taught. While we don't teach it to undergrads, there is a conceptually simpler approach to quantum mechanics: Feynman's space-time approach. Not only is it conceptually simple and intuitive, it most importantly gives an elementary understanding of the principle of least action (as a sum over all possible paths in space-time), and it offers a method of calculation that is much better suited to a certain class of problems. Now, that is useful.

If you're a layman, you can read "QED: The Strange Theory of Light and Matter". If you know some physics and maths, you can read: http://www.feynmanlectures.caltech.edu/III_03.html

That is how quantum mechanics is made simple and intuitive (and that is pedagogically superior) - not by introducing new and strange additional concepts such as negative probabilities as basic things just because you can.

The reason we don't teach path integrals to undergrads is that we expect students to actually use the formalism for calculations, and unfortunately the mathematics is much more involved than for matrix mechanics or wave equations, where you can get away with "basic" maths. If you have a very good intuition for and understanding of quantum mechanics, it is doable though; see the Feynman Lectures on Physics Vol. III (not sure what percentage of students would actually be able to absorb the intuition along with the new information, though).


Thanks. That makes a bunch of sense. (no sarcasm)


Per the footnotes at the bottom, this builds on work by Hardy from a few years earlier, who demonstrated that QM could easily have been derived by Victorian-era mathematicians as a generalization of the probability theory available at the time.

It also reminds me of Penrose's Road to Reality - while not everything in that book is perfect, it painstakingly takes the question "what is a number?", builds up a sophisticated understanding of what that means, and then explains how this impacts physics in a very real way.

We've made so many advances in recent decades that we sometimes forget to step back and reformulate our teaching methodologies to incorporate what we know, simplify the teaching, and make the ideas more accessible earlier, so that we can put more minds to work on extending them. That's a shame.


...also, regarding his final point about the speed of light: of course special relativity, which explains this, is similarly a generalization of older theories such as Galilean relativity.

If you look at the work of Feigenbaum - e.g. http://arxiv.org/abs/0806.1234 - you'll see that, much as Hardy derived QM purely by continuing older lines of thought, the fact that the universe has a maximum speed at which anything can travel - the speed of light (though it's not strictly about light) - can be derived from postulates as old as Galilean relativity. The speed of light cannot be infinite - it must have some value.


Much simpler derivation of Lorentz transform: http://www.jackpenkethman.com/speedoflight/speedoflight.html


Is this the same argument that Minkowski came up with in 1908, describing it as "staircase wit" (as it would have been impressive had it been thought of before Einstein's theory)?


I love this description of quantum. The paper Aaronson links to, "Quantum Theory from Five Reasonable Axioms" (http://arxiv.org/abs/quant-ph/0101012), is also a great read.

The "probability in L^2" cleared up a lot of confusion for me, although I still have a very poor intuition for what the Born probabilities are probabilities of. If you believe the MWI story, it seems like it's the probability you will "find yourself in the universe where this outcome happened" but even that sounds odd to me.


>I still have a very poor intuition for what the Born probabilities are probabilities of

I'm not sure anyone knows at a deep level. Experimentally you can count how many particles go a given way and it matches the calculation but how that actually works I think remains a mystery.


There's an interesting article on negative probabilities by Gábor Székely in which he shows that they can be generated by flipping a "half coin" -- a coin that, if flipped twice, gives the result of a single flip of an ordinary coin.

http://wilmott.com/pdfs/100609_gjs.pdf


FYI Scott Aaronson wrote Quantum Computing since Democritus[1] and it is an amazing book. Highly recommended for anyone interested in math, physics, theoretical computer science, quantum computing or just good science writing.

[1] http://www.amazon.com/Quantum-Computing-since-Democritus-Aar...


As other people pointed out, it is probability using the L2 norm versus standard probability, which preserves the L1 norm. This means the following: standard probability distributions are vectors that sum to 1, i.e. x=[0.5 0.2 0.3] is a probability distribution because each x(i)>=0 and sum x(i)=1. This means that the L1 norm of distribution vectors is always 1.

The L2 norm of a vector is the square root of the sum of the squared magnitudes of the entries. Hence a distribution would be a vector x(i) such that sum |x(i)|^2 is one. This allows x(i) to contain negative and even imaginary numbers.
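A tiny numpy illustration with made-up amplitudes: a vector with negative and imaginary entries that is still a valid "L2 distribution" because its squared magnitudes sum to 1, and which squaring turns back into an ordinary L1 distribution.

    import numpy as np

    x = np.array([0.6, -0.48j, 0.64])     # negative and imaginary entries are allowed
    print(np.sum(np.abs(x) ** 2))          # ~1.0, so x is a valid state in the L2 sense
    probs = np.abs(x) ** 2                 # squaring the magnitudes gives back an ordinary
    print(probs, probs.sum())              # L1-normalized probability distribution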

This awesome talk by Scott, http://www.scottaaronson.com/blog/?p=1345, explains the difference between L1 and L2 probability using the farcical Latke vs Hamentaschen debate. See the video at the link.


BTW, Stanford is offering an Intro to QM through their OpenEdx platform starting Sep 30th.

https://class.stanford.edu/courses/Engineering/QMSE-01/Fall2...


Saying that most quantum physics courses are taught from the historical point of view rather than starting with the math seems like a bit of a straw man.

My experience (Rome University) was that first we were taught the basic mathematics (Hilbert spaces, functional analysis) for almost a year, before being introduced to the physics, which followed a rather axiomatic approach (not dissimilar to this).

Sometimes our professor would casually drop bombs like the title of this article or things like: "you know in the end quantum mechanics is just Markov chains in imaginary time", but even those made sense in time (Wick rotation).


The author of the post might just be familiar with typical American freshman "physics for non-physics majors" classes, rather than regular physics curricula for majors. The general freshman science courses often do have a heavy historical aspect to them. But that's in part because that's a sub-goal of the general-ed part of the science curriculum: to teach basic history of science as well as some basic science.


The author probably has no experience with modern physics education. He probably met somebody who learned these things 50 years ago and keeps repeating this unfounded notion.


Scott Aaronson teaches at MIT... I'm sure he has some experience with modern physics education.


He teaches CS, not Physics. What they are teaching in the CS department has nothing to do with what physicists learn.


> What they are teaching in the CS department has nothing to do with what physicists learn.

This line of thinking is a huge problem in science and the reason Scott has to write articles like the one discussed. "Yeah, so those computer guys are so full of themselves that they dare say they know something about how the universe works". Well, the thing is, they actually do.


I'm a physics undergrad: his notion is correct :P


picking up on a tangent in that article:

> Two other perfect examples of "obvious-in-retrospect" theories are evolution and special relativity. Admittedly, I don't know if the ancient Greeks, sitting around in their togas, could have figured out that these theories were true. But certainly -- certainly! -- they could've figured out that they were possibly true: that they're powerful principles that would've at least been on God's whiteboard when She was brainstorming the world.

To me, evolution is a perfect example of the need for practical knowledge. Darwinian and Lamarckian evolution are both absolutely reasonable theories; it's just that only one of them matches the world we live in.


Actually, both. Heritable epigenetics is Lamarckian evolution.


How could the Greeks possibly have invented special relativity?


Feynman also wrote about negative probability and quantum mechanics: http://cds.cern.ch/record/154856/files/pre-27827.pdf


Even the best minds were not always perfect (at some point, he thought the positron could just as well be thought of as an electron going backward in time -- which we today know isn't the case).

This is a quite similar story: ghosts have the wrong sign for the kinetic term: http://en.wikipedia.org/wiki/Faddeev%E2%80%93Popov_ghost

Note that he of course didn't try to introduce negative probabilities as basic things.

In both cases it was just a matter of interpretation --just with "bad" physical consequences.


I like the tone.

> Basically, quantum mechanics is the operating system that other physical theories run on as application software (with the exception of general relativity, which hasn't yet been successfully ported to this particular OS).


Yeah, it's absolutely nuts to hear that the Copenhagen interpretation is still taught in universities, because it's such a desperate attempt to avoid admitting this truth of how fundamental QM is to reality.


The Copenhagen interpretation is no longer dominant among physicists, but I wouldn't say it's widely agreed that QM is "fundamental to reality", at least not in a strong philosophical sense of that phrase. Plenty hold a position along the lines of: QM is a mathematical model that agrees with experiment, so far.


_Every_ scientific theory is a model that agrees with experiment, so far.


That is one position, yes, although surprisingly not all that common among working scientists, because it's often seen as too skeptical of a position. The strong version, "instrumentalism", boils down to roughly: science does not discover any "truths" about the universe, but rather is just a process of building mathematical models that correctly predict observations. And we can say little else about these models except that they predict experimental data accurately so far. In particular, in this view, we cannot say that any components of the mathematical models are necessarily physically "real" or in any strong sense "explain" reality, merely that they correctly predict observed regularities.

Despite not being that popular a view among scientists generally, it is however a fairly popular view among quantum physicists, many of whom aren't that willing to commit to the physical reality of a good deal of the mathematical apparatus of QM.


"A simplified but useful picture of the goal of scientific research is that scientists obtain large amounts of data about the world via observation and experiment, and then try to find regularities and patterns in that data.

But a regularity or pattern is nothing more or less than a method for compressing the data: if a particular pattern shows up in many places in a data set, then we can create a compressed version of the data by describing the pattern only once, and then specifying the different places that the pattern shows up. The most compressed version of the data is in some sense the ultimate scientific description.

There is a sense in which the goal of all science is finding theories that provide ever more concise descriptions of data."

http://arxiv.org/pdf/1312.4456v1.pdf


This. I'd also like to add that when people talk about a theory being nice or natural, they are saying that they find it easy to compress because it looks in some respects like theories they already know.


How does the Copenhagen interpretation refute the fundamental nature of QM?


I don't have the math to understand this. If I wanted to, what should I seek out on Coursera or similar?


Linear Algebra -> Real Analysis -> Functional Analysis.

Linear Algebra and Analysis can be either applied or pure and of many different levels of sophistication.


Here's a quick list off the top of my head. On the left is a quantum mechanics concept, on the right is the math concept it's related to.

* The Schrodinger Equation --> Differential Equations. Wave equation.

* Wavefunctions and Normalisation --> Probability Density. Differential Equations and Function continuity

* Bohr Model --> Algebra

* Bracket Notation --> Matrices, vectors, vector spaces, linear independence

* Operators --> Transformations, matrices, unitary matrices

* Commutators --> see operators! maybe a little algebraic field theory for motivation on why AB - BA is not necessarily 0.

* Eigenstates --> these are special wavefunctions. Eigenvalues and Eigenvectors

I recommend Paul Dirac's original Principles of Quantum Mechanics for a baptism by fire. He's the primary source for a lot of this stuff. He simplified a lot of the methods in QM with his Bracket notation.

Sorry if I missed something.


This course https://www.edx.org/course/uc-berkeleyx/uc-berkeleyx-cs-191x... is a really good introduction in my opinion. It starts with the qubit and works its way up to the quantum Fourier transform. That sounds a little scary, but the course is really well done and you just need some basic understanding of matrices, like how to multiply them, to get started.


I haven't finished reading, but it looks like it's mostly linear algebra and matrix multiplication.


In reality you need to spend years thinking about these ideas before you can write down the equations on-demand. I would start with Lagrangian mechanics.


For reading up on p-norms related to the article, I found the wikipedia page on Lp spaces[1] to be fairly accessible as a learning tool.

Not sure if it's my learning style, but most math-related wikipedia pages read as a reference and seem to assume prior knowledge (especially with notation).

[1] http://en.wikipedia.org/wiki/Lp_space


> most math-related wikipedia pages read as a reference

This is by design, which is why it is called an encyclopedia. The fact that it assumes prior knowledge just means that you need to refer to other entries when in doubt.


Nope, because the math articles are mainly made up of circular references which only use generalized, abstract examples to illustrate.



"As a direct result of this "QWERTY" approach to explaining quantum mechanics - which you can see reflected in almost every popular book and article, down to the present -- the subject acquired an undeserved reputation for being hard."

The same goes for cryptography. Most cryptography courses spend at least the first hour talking about historical irrelevancies like substitution ciphers etc. Crypto I [1] (Dan Boneh) follows the opposite approach, i.e. starting from modern theoretical principles, defining security properties in terms of computational complexity and games.

I quite liked the Quantum Computing course [2] (Anuj Dawar) from the Cambridge CST, which also followed that approach, though it didn't present this stuff as a "generalisation of probability". No-cloning theorem in 3rd or 4th lecture, IIRC.

edit: After reading this article fully, I think it would have made for a good "lecture 0" in the above course, bridging the gap between more elementary maths and it.

[1] http://coursera.org/course/crypto [2] https://www.cl.cam.ac.uk/teaching/1415/QuantComp/


> We've talked about why the amplitudes should be complex numbers, and why the rule for converting amplitudes to probabilities should be a squaring rule.

The squaring rule is actually a special case of multiplying a number by its complex conjugate, which the article doesn't mention, unfortunately.

That is to say, if we have a number z = x + iy, we can obtain its norm from sqrt(xx + yy). But another way to express this is simply sqrt(z z*). The product z z* is just (x + iy)(x - iy). That of course is just x^2 - (iy)^2, which goes to x^2 - (-1)y^2 -> x^2 + y^2.

Geometrically, the conjugate of a complex number has the opposite angle. If z is 20 degrees from the real axis, z* is -20 degrees. Since multiplication of complex numbers is additions of their arguments (i.e. angle components), the two cancel out and the result is on the real number line.
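A quick numeric check in Python, with an arbitrary z:

    z = 3 + 4j
    # |z|^2, z times its conjugate, and x^2 + y^2 are all the same number (25.0 here)
    print(abs(z) ** 2, (z * z.conjugate()).real, z.real ** 2 + z.imag ** 2)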


Between Scott Aaronson and Eliezer Yudkowsky, who both wrote articles explaining QM "directly from the conceptual core", is there any textbook that follows this path further and in much more detail?


John Bell's book of lecture notes, "Speakable and Unspeakable in Quantum Mechanics"


I'm having trouble understanding his explanation of interference. I understand that applying the 45 degree counter-clockwise rotation twice would transform the qubit from |0> to |1>. I don't understand how this implies that there are two paths to state |0>. How could those two rotations get you anywhere besides |1>?


> How could those two rotations get you anywhere besides |1>

Obviously they can't, which is why the math shows that they don't.

After the first rotation, you are in the state (|0>+|1>)/sqrt(2). The physical interpretation of this state is that it represents a 50% chance of being in |0> and a 50% chance of being in |1>. If you naively applied the rotation to each of those two possibilities and treated the results as classical probabilities, you'd still expect a 50% chance of ending up in |0>. The two paths leading to |0> are the ones where the intermediate state is |0> or |1>.

When you actually do the math (in which "rotation" is just a name we give to multiplying by a unitary matrix U), you find that you end up in the state (.5-.5)|0>+(.5+.5)|1> = .5|0>-.5|0>+.5|1>+.5|1> = |1>.

Here we can again see the semblance of two paths leading to |0> (the two |0> terms), but they have opposite signs, so they cancel out.


Aha, thanks to you and ufo for the explanation. I think the problem was that I was thinking about it in complex exponential notation. If I had done it in matrix form, the cancellation of the two amplitudes of the |0> state would have been much more obvious.


It's not two paths to state |0>. It's one path contributing +(1/2)|0> and one contributing -(1/2)|0>, and those cancel out.

If you start at |0> and do a 45 degree rotation you end up at (|0> + |1>)/sqrt(2). If you start at |1> and do a 45 degree rotation you end up on (-|0> + |1>)/sqrt(2) (note that the coefficient on the |0> is negative now).

Now the trick is that rotations are linear transformations, so rotate(a(x+y)) = a(rotate(x) + rotate(y)). In our case, when we rotate |0> twice, we first end up at (|0> + |1>)/sqrt(2); then we can use linearity to split that into a |0> and a |1> component, rotate them individually, and then add up the results. When we rotate the |0> component we get a +|0> and when we rotate the |1> component we get a -|0>, and those cancel out (destructive interference).
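Here's the same calculation as a small numpy sketch, with the 45-degree rotation written as a 2x2 matrix acting on the column of amplitudes (amplitude of |0>, amplitude of |1>):

    import numpy as np

    R = np.array([[1.0, -1.0],
                  [1.0,  1.0]]) / np.sqrt(2)   # |0> -> (|0>+|1>)/sqrt(2),  |1> -> (-|0>+|1>)/sqrt(2)

    ket0 = np.array([1.0, 0.0])                # start in |0>
    once = R @ ket0                            # [0.707, 0.707]: 50/50 if you measured here
    twice = R @ once                           # [0.0, 1.0]: the +1/2 and -1/2 contributions to |0> cancel
    print(once, twice)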


I'll play! What necessitates the requirement that probability amplitude varies continuously? (This requirement is assumed in the section "Real vs. Complex Numbers", the so-called "continuity assumption").


I was a bit bugged by the section about the density matrix. He writes:

> Then you compute the outer product of the vector with itself

I'm not sure what he means by outer product here. Isn't the outer product of a vector with itself always null?


The outer product of vectors `u, v` is the matrix `A_ij = u_i · v_j`. That is, a matrix containing the product of each component of u with each component of v. You may have been thinking of the cross product.


It's just the Kronecker product or tensor product: (vector)⊗(Hermitian conjugate of the vector). It's not null unless the vector itself is null.
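For example, in numpy with a made-up qubit state: the outer product of the state vector with its conjugate is a perfectly non-null 2x2 matrix (the density matrix), with trace 1.

    import numpy as np

    psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # a made-up qubit state
    rho = np.outer(psi, psi.conj())            # outer product |psi><psi|
    print(rho)                                 # [[0.5, -0.5j], [0.5j, 0.5]] -- not null
    print(np.trace(rho))                       # trace 1, as expected for a density matrix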


So basically he is complaining that physicists go to the trouble of learning the physical phenomena that support quantum theory, instead of learning the mathematics of the theory directly. Does he really know what science means and how it works? Of course the math is important, but the math will do nothing for you if you don't understand the evidence for the theory and how it can be falsified. These are things that you can only grasp from the history of how physics got here.


[deleted]


Scott Aaronson has a bunch of well-cited papers related to quantum algorithms [1]. That merits you at least explaining how his explanation is not illuminating instead of blankly asserting it. I know that I found it pretty illuminating when I ran into it the first time.

1: http://scholar.google.com/scholar?hl=en&q=scott+aaronson+qua...


> That merits you at least explaining how his explanation is not illuminating

I think that if someone could explain this, they probably wouldn't need the explanation. You can't ask "why don't you understand?" and expect to get a meaningful answer.

(They might be able to point at a specific bit and say "you lost me here", but that's a what, not a how, and they might not be able to.)


What an excellent write up.


I wish I'd seen this in 2007.


I read this and didn't understand anything. Can anyone ELI5?


Sure! To quote Scott's own TL;DR:

Quantum mechanics is what you would inevitably come up with if you started from probability theory, and then said, let's try to generalize it so that the numbers we used to call "probabilities" can be negative numbers.

That's it. So in QM you can model events whose "probabilities" interfere with each other by canceling each other out (e.g. independent events A and B have "probabilities" -20% and +20% respectively, but you want the "probability" of either A or B occurring to be 0%), and do all sorts of other weird stuff.

Now why would you want to do such a thing is a whole different matter. But at least this should get you started.


Hmm, that makes sense in a weird sort of way. Positive and negative probabilities canceling each other out.


Scott Aaronson.


If anyone else was curious about Scott's usage of "God" in this lecture, he talks about it here[1]. TLDR: they are "tongue-in-cheek references to an Einsteinian God."

1: http://www.scottaaronson.com/blog/?p=189


I wish people would use the word "nature" instead of "god" in public lectures and writings, to avoid seeming to lend credence to revealed religion. Insiders know exactly what Einstein meant by the word, but the problem is not insiders.


Can you expand a little bit on that line of reasoning?

It seems to me reasonable to say that the physical sciences can, for example, lend very little or no credence to the claim that Jesus of Nazareth was the long expected Messiah of the Jews, i.e. because a judgment on the matter seems rather outside their scope.

But let's consider a different claim: that the existence of God, the origin and end of all things, can be known with certainty by the natural light of reason.

To develop the idea we might reasonably consider whether any of the loftier domains of the physical sciences, e.g. physical cosmology, can tell us anything about God's existence. Fr. Robert Spitzer, S.J.[1] (among others), has been writing[2] and speaking articulately on the subject for a number of years, and a fair bit of his material is freely available online[3].

[1] http://en.wikipedia.org/wiki/Robert_Spitzer_(priest)

[2] http://www.amazon.com/New-Proofs-Existence-God-Contributions...

[3] http://www.magiscenter.com/video-clips-and-more/


> Can you expand a little bit on that line of reasoning?

Yes, certainly. Einstein was called out on his frequent allusions to god in his public talks and writings, and under some pressure he finally described how he saw god and religion.

Einstein said that his references to god were in fact with respect to Spinoza's god, an abstract god who played no part in human affairs and that bore no resemblance to the god religious believers picture. In other words, nature -- not a judge, but a morally neutral environment.

> But let's consider a different claim: that the existence of God, the origin and end of all things, can be known with certainty by the natural light of reason.

But that's not possible without evidence. Let me explain the difference between a scientist's attitude toward issues of fact, and a religious believer's attitude.

To a religious believer, a claim is assumed to be true until evidence proves it false. To a scientist, a claim is assumed to be false until evidence proves it true -- the exact opposite.

Why do scientists take this position, formally known as the null hypothesis? Because it's the only rational way to address issues of evidence. Let's take Bigfoot as an example -- to a nonscientist, Bigfoot exists until his nonexistence is proven. But disproving Bigfoot's existence requires proof of a negative, which is an impossible evidentiary burden.

Bigfoot could be hiding under some rock on a distant planet, therefore proving his nonexistence is not possible, therefore Bigfoot exists. Therefore everything exists -- UFOs, fairies, a teapot orbiting out in space in Bertrand Russell's famous argument on this issue (http://en.wikipedia.org/wiki/Russell's_teapot), and god -- all without a shred of evidence.

Imagine if law adopted a religious outlook -- people would be guilty of any crimes they were unable to prove they didn't commit. But law (at least in modern times) adopts an approximately scientific attitude toward evidence, usually codified as "innocent until proven guilty."

This is the real meaning of the chasm between religion and science, and it's not a trivial one.

> To develop the idea we might reasonably consider whether any of the loftier domains of the physical sciences, e.g. physical cosmology, can tell us anything about God's existence.

Very easy to answer -- without evidence, no such claim can be sustained. Full stop.


The good thing about quantum mechanics is that you don't have to know anything about it and can still say stuff with it that sounds incredibly profound. "Everything is just a probability! We are all waves, maaan."

(Sort of like Freud's psychoanalysis. Everything is a penis, or your mother.)

This has probably nothing to do with the article though.



