So, the magic number here was 5σ, the generally accepted "gold standard" for discovery, which would mean a 1 in 1.74 million chance that the results occurred by chance rather than being a signal. As other commenters have pointed out, the presenter originally announced a 4.1σ observation, then continued to add data from other experiments until the combined result was 5.1σ. However, right at the end, he added in some additional data which actually brought the significance down to 4.9σ... That's science - you can't ignore data just because it ruins your big presentation.
IANAPhysicist, but I'd be interested to know how strict the 5-sigma discovery rule is considered - for example, could they still get a Nobel prize for a 4.9σ announcement? I suppose it's not that big a deal - the LHC is still running, and I'm sure they'll have enough data for a true 5σ announcement soon. Regardless, hats off to all involved, it must be exciting to be at the forefront of human knowledge :)
To be explicit, since this comment is still top voted: This comment was made by someone who only watched the first of 2 presentations.
The second presentation showed a 5 sigma result.
I'm incredibly disappointed by the trivial inaccuracy of comments on Hacker News lately, and that corrections never get upvoted quickly enough to prevent the spread of misinformation.
> which would mean a 1 in 1.74 million chance that the results occurred by chance rather than being a signal
No!! It's the chance that randomness could produce a result that large. This distinction sounds pedantic, but misunderstanding it is widespread, and it leads to fallacies in many fields, committed frequently by very smart people who ought to know better.
Imagine testing 1000 potential cancer drugs. Only 100 of them actually work, but you don't know that; you have to do the trial. So you get out some petri dishes and start testing. You look for a 5% chance of false positive (p < 0.05), which is the statistical significance level usually used in medical trials.
Of the 100 real drugs, you detect all of them. Of the 900 fake drugs, 5% of them falsely appear to work. So you have 145 drugs you think work, only two thirds of which actually work. The chance of any individual drug having obtained its positive results by chance is 31%, not 5%.
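To make the arithmetic concrete, here's a quick sanity check in Python (numbers straight from the example above; the 100% detection rate is the example's simplification, not something real trials achieve):

```python
# False-discovery arithmetic for the 1000-drug example.
real_drugs, fake_drugs = 100, 900
alpha = 0.05   # significance threshold: P(false positive | drug is fake)
power = 1.0    # simplification from the example: every real drug is detected

true_positives = real_drugs * power        # 100
false_positives = fake_drugs * alpha       # 45
total_positives = true_positives + false_positives

# Fraction of "statistically significant" drugs that are actually flukes:
print(false_positives / total_positives)   # ~0.31, not 0.05
```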
I have a guide to this here, since it's so common:
Sorry, I wasn't very clear. Usually people quote the p value as the chance the result is a fluke; the p value in CERN's case is p = 1/1,740,000. But that's the chance that the effect would be produced if the Higgs did not exist, which is different.
By analogy in the medical case, p = 0.05. The incorrect interpretation is that this means only 5% of drugs with statistically significant benefits actually achieved these benefits through luck; rather, the right interpretation is that 5% of the nonfunctional drugs somehow appeared to work.
You could also imagine testing 200,000,000 hypotheses which were all completely false. Even if you used CERN's level of statistical significance, you'd still quite likely find one hypothesis which appears to be true, simply by chance. The chance of that hypothesis being false is 100%, despite the significance level of 1 in 1,740,000.
So yes, 31% is exactly the chance that randomness produced the effect in the trial. But people will try to tell you that it's actually 5%, and they're wrong.
This thread started with the claim that "the chance that the results occurred by chance" was different from "the chance that randomness could produce [the result]".
But you're saying that in your example both are 31%. So again, I ask, are we talking about two separate things? And if so, can you give an example where the two things have different values?
In my medical example, "the chance that the results occurred by chance" is 31%. "The chance that randomness could produce [the result]" was only 5%.
For CERN, the chance that randomness could produce this result is 1 in 1.74 million; the chance that the results occurred by chance is larger, but not computable with the information we have.
The guide I linked to above gives a much better explanation than this. I rushed my first post here, and I think I was unclear.
Imagine flipping a perfectly fair coin 100 times. You'd expect to see 50 heads, but you don't always -- it's just an average. Suppose you see 75 heads. What is the chance that you'd see 75 heads with a fair coin? Very very small. The chance that randomness could produce such a result is small.
Now, imagine you test 100 perfectly fair coins. A few of them give more than 75 heads, just by luck. You conclude they're unfair, since the result is unlikely otherwise. The chance that randomness produced the effects you saw is actually 100%, because all the coins are fair.
There's a difference between the question "How likely is this outcome to happen if the coin is fair?" and "Given that this outcome happened, how likely is it that the coin is fair?" Statistical significance addresses the first question, not the second.
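If you want to check the coin numbers, the exact binomial tail takes a few lines with nothing but the standard library:

```python
from math import comb

# Exact probability of seeing at least 75 heads in 100 flips of a fair coin.
n, threshold = 100, 75
p_tail = sum(comb(n, k) for k in range(threshold, n + 1)) / 2**n
print(p_tail)   # ~2.8e-7 -- "randomness could produce this" is tiny
```

Note that 75 heads is five standard deviations above the mean of 50 (sd = 5), which is why the number lands in the same ballpark as CERN's significance threshold.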
I suspect Almaviva is talking about the biases inherent to publication / talking about results.
Consider, by analogy, the event "rolling a six sided die and getting a 6 and announcing that fact to the world".
"What is the probability that random events could produce a result that large?": one in six, per die roll. The question excludes the whole "announce it to the world" filter.
"What is the probability that these results [getting a six and announcing it] occurred by chance, rather than being a signal?": We have no idea. If the person announced "I'm going to roll one die and announce the results, regardless of the outcome", then it's one in six. If they kept rolling dice until they got a six, then the probability is 1. If they rolled 3 dice, then the probability is 91/216.
The point is that the scientific method has all sorts of biases (publication bias, confirmation bias, etc.) and p-values are rarely "probability that the result is wrong".
Standard null hypothesis testing -- including what they're talking about here -- focuses on
P(results | random noise is active)
You're talking about
P(the thing we care about | results)
The former is more like a simple sanity check. If random noise could have produced what you see, you shouldn't take the results too seriously.
The former tells you a little bit about the latter -- which is good, because the latter is what we actually care about. But you can't explicitly compute the latter without making much stronger assumptions like priors and the like. That's why this last step of reasoning is often performed qualitatively.
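For the curious, here's that last step done quantitatively via Bayes' rule, borrowing the drug-trial numbers as a stand-in prior (in real life the 10% prior is exactly the quantity you don't know, which is why the step is usually qualitative):

```python
# Bayes' rule: turning P(results | noise) into P(real effect | results).
p_real = 0.1               # prior: fraction of tested hypotheses that are true
p_pos_given_real = 1.0     # power (simplified: real effects always detected)
p_pos_given_noise = 0.05   # significance level: P(positive | pure noise)

p_pos = p_real * p_pos_given_real + (1 - p_real) * p_pos_given_noise
p_real_given_pos = p_real * p_pos_given_real / p_pos
print(p_real_given_pos)    # ~0.69, so ~31% of positives are flukes
```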
1. I'm rolling two dice to try and get the highest total. I get two sixes. What is the chance randomness produces this? 1/36, about 0.028. That's more than a two sigma result. What is the chance that this is caused by random chance? 2.8%? Nope, 100%.
2. I study the same thing in a million situations in parallel. I take the most extreme result and find that random chance can produce this one time in 1.7 million. It's a five sigma result! What are the chances this result is caused by random chance?
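You can put a number on scenario 2 directly (two-sided Gaussian tail, and assuming the million tests are independent, which is generous):

```python
from math import erfc, sqrt

# Two-sided probability that a single pure-noise measurement lands
# five sigma or more from its expectation:
p_single = erfc(5 / sqrt(2))      # ~5.7e-7, i.e. ~1 in 1.74 million

# Probability that at least one of a million independent pure-noise
# measurements does so:
n = 1_000_000
p_any = 1 - (1 - p_single) ** n
print(p_any)   # ~0.44 -- a "five sigma discovery" is nearly a coin flip
```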
The 5σ convention is more about announcement than about discovery. No one can really say when a discovery is made. As more information comes in, the image can remain fuzzy or can get clearer and clearer. In the latter case, 5σ is a convention as a threshold for when you are entitled to announce a discovery without being blamed for jumping the gun. They'll collect data for many more years before passing the threshold of Nobel-worthy discovery.
I think these physicists have been watching too many Steve Jobs presentations. "4.9σ... Oh, but one more thing!" ;)
In all seriousness, does anyone know how common it is to do this kind of gradual reveal during scientific presentations vs. stating your basic final results in the introduction? It's kind of fun, but you could cut the tension in that room with a knife!
The way they handled it was perfect. For over 20 minutes each, the presenters for CMS and Atlas explained their measurement capabilities, methods, recent improvements, and recorded measurements before stating their conclusion. They made sure to cover their bases and justify the conclusion before making a big announcement. I'm pretty sure the abstract for the papers will state the conclusion early on but the initial announcement for the greatest achievement in Particle Physics shouldn't be prefaced with "We found it!" just for the convenience of the impatient.
5σ is not a rule. It's just a measurement of the probability that the result you observe is an accident. It means that it's very unlikely that it was just pure luck.
By the way, σ measurements are also used in many engineering and quality-assurance jobs. The usual industry "goal" is 6σ (say, when you manufacture consumer goods in large quantities, like millions every month). But some other industries set the benchmark higher - from what I know, airlines have quality systems targeting up to 9σ to minimize the risk of something going wrong.
How can they possibly have up to 9 sigma? 6 sigma is a failure rate of about 1 in a billion, whereas 9 sigma would be 1 in 10^19 or so! That's oddly similar to the number of grains of sand on the earth![1]
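For reference, here's how sigma levels map to Gaussian tail probabilities (one-sided; the 1-in-1.74-million figure quoted elsewhere in the thread is the two-sided version at 5σ):

```python
from math import erfc, sqrt

def tail_probability(n_sigma):
    """One-sided Gaussian tail probability of an n-sigma deviation."""
    return erfc(n_sigma / sqrt(2)) / 2

# 5 sigma: ~2.9e-7 (about 1 in 3.5 million)
# 6 sigma: ~9.9e-10 (about 1 in a billion)
# 9 sigma: ~1.1e-19
for s in (5, 6, 9):
    p = tail_probability(s)
    print(f"{s} sigma: p = {p:.2e} (about 1 in {1 / p:.1e})")
```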
The math will happily pop out absurd numbers. Remember that all stats carry assumptions with them, such as "this is a Gaussian process from negative to positive infinity" and "everything is absolutely perfectly independent" and other things that break down if you push them hard enough. What that is telling you is just how hard you're pushing the math, rather than the real odds. I'm not 1 in 10^19 sure of anything.
>IANAPhysicist, but I'd be interested to know how strict the 5-sigma discovery rule is considered - for example, could they still get a Nobel prize for a 4.9σ announcement?
It's very much a HEP thing, and done that way because HEP is pretty much all statistical analysis these days. Other fields wouldn't treat the sigmas as the most important thing, and I've heard mutterings that it's not really the most accurate approach - but it is objective and easy to apply.
5σ seems like an unnecessarily high standard to this non-physicist; what's the rationale for that? At 5σ we could publish 1,000 major discoveries a year and expect only one false discovery to slip through every 1,740 years.
The two main rationales are that systematic uncertainties are historically under-estimated and that we are looking in many channels (more than 1000), so it would not be too hard to find a 3 or 4 sigma anomaly. The second part is the so-called "Look Elsewhere Effect." If you hit 5 sigma, you are fairly safe from either of these effects ruining your "discovery."
> The second part is the so-called "Look Elsewhere Effect."
Also more generally known as the "multiple testing" problem, fwiw (not sure why it has a different name in physics, unless I'm missing a subtlety).
It's a major problem in "big data" also, where people just data-dredge thousands of possible parameter choices and pairwise correlations, and then report the p<0.01 results that came up, even though you'd expect several false positives just by chance with that methodology.
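A tiny simulation makes the point (pure noise, fixed seed so the run is reproducible):

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Dredge 1000 "parameters" that are pure noise, reporting any with p < 0.01.
n_tests, alpha = 1000, 0.01
false_positives = sum(random.random() < alpha for _ in range(n_tests))
print(false_positives)   # expect around n_tests * alpha = 10 hits from noise
```

Every one of those hits would look publishable on its own; the methodology, not the data, is what produced them.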
Actually, it's because the rest of you "scientists" end up publishing bullshit results that you got by chance.
I'm kind of joking - most other scientists don't collect enough data to have to worry about 1 in 10,000 events happening by chance. In medicine, though, I'm not joking at all - those guys publish absolute statistical garbage all the time, I hesitate to even consider it a science because the data dredging is so bad. I can "prove" just about anything if the publication standard is 95% significance...
The question this actually answers is: Assuming it was luck (ie there's no Higgs), how likely is it to get the measured result (or one even more extreme)?
Exactly. P(results|luck) and P(luck|results) are two very different things. We don't know the probability that the results are due to "just luck" (whatever that means).
I think it's quite dramatic how many very intelligent people (up to writers in major publications) are uneducated about this distinction, and how dramatically wrong decisions can be if this is misunderstood. If we could teach this, as well as correlation vs causation, I'm convinced the intelligent public would make much better decisions about many things, medicine and nutrition to name a couple.
I've given up. Considering this is supposed to be a big announcement, probably important for a number of reasons that may affect a lot of people, I am surprised he didn't start with:
"I know a lot of people are tuning in without degree's in Physic's. Let me break it down for you in Leymens terms. We are fairly certain we have discovered this. It is important because of that. Now let me get onto why we think this."
I get that this talk is not meant for me. However, it is important - apparently. It's on the front page of the Guardian.
If this is an announcement of great importance and it is 98% mumbo jumbo aimed at high-end physicists or whatever then.. I don't know. It's another chance to get people interested in science that has been missed.
Note: I am not saying the whole talk should be dumbed down. I am just saying a 2-3 minute prefix for those who do not understand a single word for the first 20 minutes of the presentation.
I am a particle physicist. The most interesting point of this announcement for me is that this is a confirmation of predictions of a very exotic particle.
The properties of subatomic particles include something called 'spin' - that's a fundamental quantum mechanical property. The higgs boson is the first elementary (i.e. not made of other particles) spin-less particle that we've discovered; it's completely unlike anything that we've seen up until now.
That the model that we have constructed can accurately predict its existence and the way that it decays without having observed anything like that beforehand is a huge confirmation that we're in the right region of model space. Today seems to be a huge confirmation that our understanding of physics is not fundamentally broken.
That's why it's important; the prediction is like attempting a 5-point dive and nailing it pretty much perfectly. It's an impressive confirmation of 50 years of theoretical work.
(The anarchist in me would have preferred them not to find anything, I must admit. That would have been much more interesting, as the standard model came tumbling down... :-)
There will be time for popularization for the laymen. It is not now. It's more important that the knowledge be transmitted accurately and completely, than it is to give a reader's digest for the laymen. We'll have to wait until all of the discovered information is processed, and then summarized. Making a summary early on in the discovery is likely to contain errors. It's also better handled by people other than physicists working on the project.
The standard model predicted it and it was found. That's a major win for the standard model, which is the result of many years of theoretical work, as said above - it was able to predict something completely new, something that was never observed before.
I'm not deep enough in the field to see what questions could be answered, and we will see, but it's always a good thing to be able to rely on your model of the world.
So - this is not about other questions in the first place, it's about the validity of the standard model. We can continue from there.
(A common title for talks etc is "Physics beyond the Standard Model".)
I'm curious what you mean by "pretty much." Did the Standard Model predict 125 GeV?
I'm asking because I've seen a few casual descriptions of this Higgs as a "lightweight." I'm assuming that means it's not as heavy as expected?
EDIT: "If the mass of the Higgs boson is between 115 and 180 GeV, then the Standard Model can be valid at energy scales all the way up to the Planck scale (1016 TeV)."
This isn't exactly my field, but as I understand it a Higgs of 125GeV implies a supersymmetric model with relatively light squarks and no excitingly novel features. Basically, a small adjustment to the standard model that allows for one more family of heavy quarks and not much else.
Even the Guardian reporter has admitted that it goes over her head. Heck, I'm doing a Master's in condensed-matter physics and I don't understand much of the jargon.
What I can tell you is, the parts which sound the most intimidating are actually probably the simplest bits. CERN operates a particle accelerator -- this means that the LHC basically smacks subatomic particles into each other at absurdly high speeds to create infinitesimal explosions with tremendous amounts of energy (these are the TeV and GeV numbers that you see -- they're talking about the amount of energy concentrated in the explosion). The explosion disrupts the underlying fields of the universe so much that new particles can be created or destroyed, but if you excite the Higgs field enough to produce its quantum (the Higgs boson), it tends to decay immediately into other things.
The other things are subatomic particles, including quarks (the letters u, d, c, s, t, and b for up, down, charm, strange, top and bottom -- you may have heard him for example say 'bb') and bosons (he talked a bit about W W* and gamma-gamma; gamma rays are light while W bosons are, well, a little more complicated let's say).
All of the stuff he says about Monte Carlo and so on is about creating "expected" curves from the Standard Model. You want to have two curves, "expected" vs. "actual", so that you can compare them.
On the base axis usually there is energy -- this is the energy of the explosion. There are usually two curves from Monte Carlo which tell you what you expect to see. Then there are data points with error bars which tell you what's actually seen and what the statistical "counting" errors are, how weak the signal is. Usually there is then a follow-up graph where they have tried to "subtract out the noise" to see the signal more clearly.
I feel like the statistics going on here are almost as fascinating as the physics. Technically they're the same thing, but it's so intriguing to see people whooping and hollering at 5.0 sigma.
This is for the scientific community. The scientific method is that all results should be scrutinized, tested and verified. If you want a 2-3 minute explanation, wait for CNN.
I don't really buy that in this case. It's still high level and summarized and no one is verifying or scrutinizing anything based on that presentation alone.
NASA handles these kinds of announcements well, but then they also announce cyanide-based life. So.
When scientists present results to scientists, they present in a scientific way. I.e. methods, analysis, etc. There is no way a scientist can get by with just a short summary when talking to fellow researchers... it's just not how science is done.
I understand that. I don't understand why announcements of this significance are done like this.
It's like NASA landing on the moon without video and presenting geology findings based on the rocks. Sod that. The people want to see VIDEO! They want to live the moment. I thought this could be one of those moments where something significant was discovered which I may be asked about in many years' time. A "this changes everything" moment. The way it is presented, though, may be just that for scientists. For everyone else though.. who cares, when the announcement is this technical.
Surely I wasn't the only person wondering whether we are now closer to the hoverboard? That would have been a nice way to start.
Screenshot of hoverboard
"For the leymens tuning in. Our discovery means this is / is not closer to being made."
This isn't an announcement. There's no press. This is CERN doing us a courtesy and letting the public see a presentation they were going to give anyway.
The ATLAS lady has what I think must be the worst set of PowerPoint slides I've ever seen. Epically bad. Pretty amazing.
Even if you are working at CERN running the equipment, there's no way you could absorb all the info on each of those slides in the 10 to 15 seconds she shows them. They might as well have pictures of frolicking kittens on them.
Clearly you don't understand how much time pressure these people are under. If you were at CERN you would realize how intense the atmosphere has been in the past 2 weeks.
Also, focus on the content - if you're caring so much about the presentation, then you probably don't understand enough of the physics to comment on the content.
Seems like a lot of it is just comparing more recent data to last years data. I'm sure they have spent plenty of time looking at last years data, so they can probably understand quite a bit at this pace.
This, unfortunately, is not even that bad compared to the litany of overloaded, eye-scorching PowerPoint presentations I usually sit through in research group meetings, conferences, and elsewhere. Maybe all intro STEM curricula should include a course on communicative design.
That'll happen, someone will explain it in "layman" terms about why/how this is fundamentally important. That's probably not going to be done at this announcement though, but hopefully in the aftermath by the media reporting on the announcement.
Short answer: They discovered a particle which looks like a Higgs Boson.
They are talking about the standard model Higgs, which is the result of a specific way to break electroweak symmetry. A consequence is, that there are quite well understood predictions from this how the cross sections and branching ratios should look like. And on the current level of statistical significance it looks like a standard model Higgs.
On the other hand, there are so called effective field theories, that is you can start from a complicated theory and derive a simpler theory from it, which behaves the same in some aspects (for example at low energies).
So the more exact answer is probably that now a theory has to contain a Higgs boson in the appropriate limit.
The error bars on the mass range are typically just 1 sigma, so there's considerable overlap between the two figures (the Atlas figure is within the 95% confidence interval for the CMS figure, for example).
In December 2000, Hawking bet Gordon Kane $100 that the Higgs boson would not be discovered at the Fermilab Tevatron.
Since the funding dried up for the Tevatron and the first hints of the Higgs boson came from the LHC, we can conclude that Hawking won the bet and Kane will be the one paying up.
It's possible to combine the data and get a higher significance. AFAIK, officially CERN doesn't do these combinations, but independent people do. See here:
This paper combines different channels from CMS, but not CMS and ATLAS.
"In this Letter, we report on the combination of Higgs boson searches carried out in proton-proton collisions at
Sqrt[s] = 7 TeV using the Compact Muon Solenoid (CMS) detector at the LHC.
...
Combined results are reported from searches for the SM Higgs boson in proton-proton collisions at Sqrt[s] = 7 TeV in five Higgs boson decay modes: γγ, bb, ττ, WW, and ZZ."
Those two detectors are attached to the same accelerator, right? My guess is they don't because they cannot guarantee the results to be independent from each other?
Twitter is full of bashing. I don't think it matters. The presentation is already unfriendly to anyone not in the scientific community, so presentation isn't really that important.
However, as everyone else has said, it would have been lovely to have a layman's TL;DR. Perhaps that's the press conference at 11:00, and expecting it beforehand is arrogant.
I love how we're using Hacker News here -- this is definitely not what it was designed for. What we're really doing is something like a live chat room while watching a common talk, but unlike chat rooms it can be threaded and points can be allocated. Also unlike a chat room, HN does not automatically update when we get new discussion messages, but that's a constraint of the technologies at the time it was built.
It might be very interesting to try to use comet-casting or websockets to revolutionize chat in precisely that way, realtime threaded discussions. So, in addition to all of the chat constraints you have the ability to dynamically mark certain chat messages as replying to other messages, and as the noise in the chat room gets higher you can filter yourself to just "I want to follow this discussion."
This is why Google Wave got me so excited - and why I was so disappointed when it was botched and silently killed. I think an app like Wave could be great for many things, including discussing a live event like this.
My thoughts precisely.
The code is still out there though - just waiting for someone to pick up where Google left off. . .
http://incubator.apache.org/wave/about.html
My plan is to try to rewrite this as a SharePoint add-in, so that all those companies who install SharePoint as their "solution to everything" get something good with it; that could help grow the user base and promote belief in the technology. Then it could be opened out into allowing people to connect to hosted public services, and eventually get back to Google's original vision.
ps. In honour of the subject matter, perhaps we should be calling it Google Duality?
I'm glad to hear I'm not the only one sad that Wave passed away. I really think it could have found some unique uses, but most people (and, transitively, Google) just didn't give it enough time.
Isn't it open source now? I might try the open source version one of these days... Maybe I can even convince my friends to use it :).
I love how HN users love to do live meta-analysis; in most topics, a significant percentage of people are focused not on the topic itself but on observing other users' interactions and their techno-historical context. These observations show opportunities for future products.
Really striking what a large percentage of the words are jargon. Sometimes I understand less than 10% of the words in a sentence. I think I might now better understand how a non-programmer feels when seeing a talk related to programming.
That's true for almost every field of academic research and work; around 70-80% of the words are not in any dictionary (or, if they are, their standard definition has nothing to do with their professional meaning).
What always interested me is how the ratio of new words to repurposed words varies per field.
For example, in CS we use a whole bunch of words like "string", "thread", "class", "type", "object", "arrow", "map" and "macro" to denote CS-specific concepts related at best tangentially to the words' original meanings. On the other hand, biology seems to prefer to come up with new words for their technical terminology.
I wonder if this is a product of different cultures or something like that.
The worst is botany. Botanists use common culinary words to describe almost entirely non-overlapping sets of plants/fruit etc. The "Tomato a fruit?" question is nothing compared to the "berry" thing. According to the botanical definition cherries, raspberries, strawberries, boysenberries and blackberries are not berries, but bananas, watermelon, avocado and pumpkin are.
Nuts are worse. According botanists, peanuts, cashews, macadamias, pistachios, walnuts, almonds, pecans, pine-nuts and Brazil nuts are not nuts. According to most lay-people, though, botanists are nuts.
I don't understand the particle physics they're talking about, but I do find it fascinating how a lot of the work is really how to sort through a massive amount of data to remove all the noise and find the signal. It sounds like they're using some machine learning algorithms to examine and classify various interactions in the data.
I wonder if the people doing it are trained in computer science or physics? Not that it should matter in the end results, just curious how people got there.
A lot of the modern research in 'big data' analysis is/was driven by physicists. Bayesian Inference is about trying to make a decision about what you can infer from an observation or series of observations, and the impetus for this came from trying to make sense of experimental results.
Two of the really great text books in the field are by physicists,
1) 'Information Theory, Inference and Learning Algorithms' by David Mackay, a physics professor at Cambridge. Perhaps the most readable and enjoyable text book I own. Certainly up there.
2) 'Pattern Recognition and Machine Learning' by Chris Bishop, now a director at Microsoft Research in Cambridge but formerly a physicist. Delightfully, under the circumstances, his PhD supervisor was Higgs (yes, the one of boson fame)!
They're probably physicists. One can take IT classes during the study, and some people have a very high skill level. Data interpretation is a big part of being an experimental physicist, and these algorithms are very useful, so people will seek them out.
Computer science people work in other areas, such as setting up and running the data collection and on-line processing. (A professor told us many interesting stories about the many Unix servers they built and the bugs they created...)
few of us are formally trained in CS. some of us are good. others are not.
my understanding is that the computer engineers at CERN are mostly tasked with IT work, the rest (including DAQ software/firmware, network code, distributed+realtime data processing, etc) is made by the physicists.
If you're going to make an announcement that is going to be held up to the highest rigorous standards, figuring out exactly what to say is going to be a difficult issue. This presentation isn't for you, this is for the physics community.
Right now, he's presenting general stuff that everyone in the room already knows, including him. It's just a summary of the status of the project up to now.
He could have practiced this 6 months ago.
Aaaand... at this very second he's starting on the new stuff, I think.
Sounds like the kind of content that is just very hard to condense into a reasonable amount of time. This seems super dense to me, but you can imagine to a physicist this is a high level overview.
Next time you do a major presentation on state-of-the-art high-energy physics experiments live to a world-wide community on what could be a Nobel Prize winning event, let us know and we'll watch you do it better. ;)
Probably because the guy's presentation style is terrible, rushed, and his slides are a graphical disaster. He jumps from overload to overload, connecting with "obviously" and "as you can see". There is no overview, nothing connecting the endless series of slides. Everyone in the room is just waiting for the Higgs announcement.
I gather that they wanted to include last minute data, but given that they're livestreaming this and tons of people are watching, it was a huge chance to get a decent presentation done that would at least highlight the important results clearly rather than having them be throwaway lines between jargon.
I am just a hobbyist, but as I understand it, 5.0 sigma relates to the probability of the result being bogus: the chance that pure luck would produce a signal this strong is about one in 1.74 million. 4.9 sigma would mean a somewhat larger probability, but I'm not sure exactly how much that 1.74 million number shrinks.
Designer sits in coffee shop wearing his hipster outfit drinking his hipster coffee, writing incensed blog post about the outrages of Comic Sans.
Physicist makes presentation on what is clearly a state-of-the-art advancement in the progress of high-energy particle physics (and thus, physics) to a world-wide community, live, and NOT a fuck was given as to what font is used. :P
Thank you! It seems very few people understand the immense pressure and intense atmosphere that surrounds work like this, especially in culminating times like these.
Fabiola would likely rather spend her time improving the quality of her analysis, which is undeniably more valuable to the scientific community than agonizing over the font.
I have a 5 sigma level of confidence it was either Chalkboard or Comic Sans. We'll need a real designer to chime in to present their own numerical analysis.
I'm in a place with single-bar wifi signal and the livestream is really choppy.
Does anyone have something like a buffered stream on a delay so I can be sure I don't miss anything, even if I have to stop to buffer more of the stream?
I have a vanilla Chrome browser and a fairly recent VLC.
Nothing to do with the science, but it looks suspiciously like they are using Squeak to produce their slides. Maybe they are using it in some capacity to collate data?
At the point you made the comment, the full number had not yet been shown. That was only the combined number for the Higgs to gamma-gamma channel for 2011 and 2012 (so far).
Can someone please explain what 5 sigma signifies?
Edit: “Evidence” usually means a 3-sigma signal, which existed last December. “Proof” would be a better way to describe a 5+ sigma signal, if that’s what the combined CMS/ATLAS data shows - http://www.math.columbia.edu/~woit/wordpress/?p=4809
5 sigma is the traditional limit of significance for new discoveries in particle physics. If you have data showing the existence of a new particle or phenomenon at 5 sigma you publish and you announce and you start saying "this thing exists" instead of "this thing may exist".
They just announced 5.1 sigma. On a normal distribution that corresponds to roughly 99.999966% confidence: there is about a one-in-three-million chance that a random background fluctuation alone would produce an excess at least this large, so it is overwhelmingly likely that the excess is due to a new particle.
Yeah. The gamma-gamma and Z-Z together have been combined into a signal of 5 standard deviations from the no-Higgs Standard Model. Which is the common threshold for a discovery.
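As a rough back-of-the-envelope for why combining channels helps: for independent channels, significances add approximately in quadrature, so two sub-5-sigma channels can together cross the discovery threshold. The real combination is done with full likelihood fits, not this shortcut, and the per-channel numbers below are made up purely for illustration:

```python
import math

def combine_significance(*sigmas):
    """Naive quadrature combination of independent significances.
    Experiments combine full likelihoods; this is only a rule of thumb."""
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical per-channel significances, for illustration only
gamma_gamma = 4.1
z_z = 3.0
print(f"combined: {combine_significance(gamma_gamma, z_z):.2f} sigma")
```

Under this approximation, a 4.1-sigma and a 3.0-sigma channel combine to about 5.1 sigma.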
Why is potentially the world's finest discovery of the 21st century being communicated using the worst software cobbled together in the 20th? (a.k.a. Flash)
It works. Flash Player is still the only browser plugin that can do actual streaming (not progressive download), using either Adobe's own Media Server or similar open-source servers at the back end. If need be, that is also the best protection against content theft available so far.
It is not that difficult to set up the whole streaming pipeline with the many free and open-source solutions available today. It's just a good means to a useful end.
EDITS:
Just as I suspected, it's using a media streaming server:
"rtmp://cern.fc.llnwd.net/cern/"
It also automatically adjusts the stream quality to match the user's bandwidth.
Latest application of the uncertainty principle - you can have 21st century discoveries or 21st century technology, but you can't have both simultaneously.
It's just very quiet; you're hearing the spillover from the room into the main mic. If you crank your speakers you'll hear people talking, but be careful when someone talks into the mic :)
I am under the impression that what you think is not entirely true:
"How should we make it attractive for them [young people] to spend 5,6,7 years in our field, be satisfied, learn about excitement, but finally be qualified to find other possibilities?" -- H. Schopper
The work at CERN is valued differently depending on whether it is performed by Westerners or by people from the East:
"The cost [...] has been evaluated, taking into account realistic labor prices in different countries. The total cost is X (with a western equivalent value of Y)" [where Y>X]
No attempts, just plain facts: a reminder of those who did their fair share in contributing to such a scientific achievement but were discriminated against compared to their "western equivalents" only for being born in a non-western country.
The cited document (did all the downvoters also take their time and actually read the TDRs and papers in detail??? Or are they just ignorant sheep?) is a rare case of putting the facts on the ground down in a written and approved document, despite being taboo in an organisation touting "equal opportunities" and such policies.
No downvotes will change the situation I warn about above, quite to the contrary: may my previous comment serve as a warning to all non-westerners at CERN for the time being.
If you have difficulties in accepting criticism based on a factual quote from a (peer!) reviewed document, then you might consider changing your behaviour.
The moment one experiences the consequences of such discrimination, things get far more real than the absurd conclusion you are alluding to.
No, I posit that you are taking the quote and extrapolating your own conclusion. The report does not make the same conclusions you are making. All it seems to be saying is that labour costs are different between countries.
> All it seems to be saying is that labour costs are different between countries.
Do you think that this evaluation scheme has any consequences for peer evaluation within the same group? [aside from the inherent bias, not even mentioning all the other loopholes with categorisations such as Scottish (read: western) MC-EST/PhD etc.]
It is not from the ATLAS experiment but from another LHC experiment -- though still within the organisation of CERN.
The comment/quote is about evaluating people -- i.e. by the simplest budgeting metric, labour cost (with obvious consequences for peer evaluation). It has nothing to do with colour but with peer evaluation of equivalent work, differentiated according to eastern or western membership.
edit: unable to reply to the comment below, but I can cite concrete case(s) in which work was performed in Geneva by both an eastern and a western member within the same group. China isn't a member state anyway.
Uh, it says labour costs in different countries. I.e. if the work is done in China then wages may be lower, if it's done in Switzerland then wages will be higher.
There's no big conspiracy here by CERN, just different wages in different countries.