Hacker News — spxtr's comments

Pretty funny.

Having taught low-temperature condensed matter labs, I can say that a big part of the grade is figuring out what went wrong, and either correcting for it or at least acknowledging that it went wrong. The student needed to give more information about the experimental setup (what instruments did they use? four point or two point resistance? resistivity vs resistance? what is R_0?) and why they think the experiment didn't work. It looks to me like they had something miswired, so they only measured noise.


My understanding of undergrad lab work is that the point is not to get results, it's to learn how to do lab work.


Usually on a Friday afternoon/early evening while other non-Science/Engineering types are out having something called a Social Life ...


Indeed, a scientist should have some exposure to experimentation. Experiments don't always work on the first try, and often require some knowledge and skill.

The ones who can't solder go into electrical engineering and sit at a computer terminal all day. (joking of course)


Do you disagree with ET Jaynes then?


You'll have to be more specific: what are you asking me whether I disagree with?


I'm pretty sure the parent comment just dunked on you by demonstrating a deep well-read understanding of the underpinnings of both logic and statistics.

https://en.wikipedia.org/wiki/Edwin_Thompson_Jaynes

"Jaynes strongly promoted the interpretation of probability theory as an extension of logic. "


You're commenting on the wrong site (or just ignoring the rules & spirit of discussion) if you thought it necessary to tell someone they got "dunked on".


>> I'm pretty sure the parent comment just dunked on you

Oh, sorry, I didn't realise this was a dick-waving competition.


ET Jaynes has a book "Probability Theory: The Logic of Science". It's a nice book, and I was wondering if you had any thoughts on it.


I haven't read the book. What does it say?

On the matter of probability and science, I like how Karl Popper put it, although I can't find the text now so I must report it from memory. His point was that scientific hypothesis formation is an instance of inductive generalisation while probabilistic inference is a form of abductive reasoning, and so using probability to support an inductively derived hypothesis is basically supporting a guess, with another guess.

Statistics of course is not the same as probability. Personally I think statistics is a bunch of hooey.


Superconductivity in TBG was originally sold as "unconventional". This article reaffirms that claim by showing how it cannot be a BCS superconductor, and more. Very interesting.

It's worth reiterating that while graphene can have some niche uses in "the real world", the main reason that it is so highly prized within academia is that it is a superb platform for studying fundamental physics, as in this work. Maybe in the future this will lead to room-temperature superconductors or something along those lines. Maybe not. Nobody jokes about how the Higgs boson has failed to leave the lab.


Offtopic, personal bugbear on a grumpy early morning:

I have a physics degree, which is true of only a minority of HN users, and I have no idea what TBG and BCS stand for here. Using abbreviations when communicating with audiences that can't be expected to know them wastes everyone else's time to save you seconds of typing.

Edit: TBG - Twisted Bilayer Graphene, BCS - Bardeen–Cooper–Schrieffer.


These sorts of abbreviations can be basically impossible for the rest of us to figure out. Thank you.


Me too! :-)


Whenever I read about bilayer twisted graphene or topological insulators, I can’t help thinking that these are going to be the basis for next-gen transistors. Of course, I totally agree that understanding nature is its own goal; please raise my taxes for graphene research!


They won't, but MoS_2 very well could be.


Well if not it's still a nice bike chainwax additive! (/s/2)


The other problem with this approach is that it is limited to 50% efficiency long-term. They claim "up to 90%", but it is only 90% efficient immediately after switching directions. The efficiency then drops to 0% before switching modes again.
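As a rough sanity check of the 50% figure: if the instantaneous efficiency decays linearly from 90% right after a direction switch down to 0% just before the next one (a linear shape is my assumption for illustration; the actual decay curve isn't given), the cycle-averaged efficiency comes out under 50%:

```python
# Cycle-averaged efficiency under an assumed linear droop from 90% to 0%.
# The linear decay shape is an illustrative assumption, not a stated fact.
n = 1000
ramp = [0.90 * (1 - i / (n - 1)) for i in range(n)]  # efficiency over one cycle
avg = sum(ramp) / len(ramp)
print(f"cycle-averaged efficiency ~ {avg:.0%}")  # ~45%, i.e. at most ~50% long-term
```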


Matt Yankowitz's "Tuning superconductivity in magic-angle graphene" shows how hydrostatic pressure affects TBG magic.


I was measuring a delicate electronic device at MagLab in Tallahassee. They warned me that the best data would be had at night, because of reduced noise from a nearby radio station. Precisely at 8 PM every night my data became noticeably sharper.


Particle physics does not include that. There is no evidence that superconductivity requires any physics outside of the standard model.


Lovely article.

> Creating a working device typically takes them dozens of tries. And even then, each device behaves differently, so specific experiments are almost impossible to repeat.

This is frustrating. You can make two twisted bilayer graphene samples at 1.10 degrees precisely (to within 0.01 degrees), and they will show completely different phase diagrams. One will superconduct, but the other will not. Things like that.

What I learned recently is that every transport paper's twist angle report is wrong. The two hypothetical samples are actually probably not both 1.10 degrees. The uncertainty in twist angle should be of order 10-20%, rather than <1%. I even made this same mistake in my own paper last year!

When creating these TBG samples, we used to literally tear the graphene in half, to get accurate relative alignment of the two halves. It was very clever, but it imparts a huge amount of strain to the two layers, generally of order 0.1-0.3%. This seems like a small amount, but moiré patterns are extremely sensitive to this (roughly strain amount divided by twist angle, but the twist angle is very small), so the unit cell area gets modified by anywhere from 5-30%. In transport measurements, we can only measure moiré unit cell area, but not twist angle. The number 1.10 ± 0.01 deg is calculated assuming no strain, and this is an incorrect assumption. An STM paper from 2019 first pointed this out, but it was just a couple of sentences buried in the supplemental material, and I (and most others) completely missed it.
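The strain-to-twist sensitivity described above can be checked with a quick back-of-the-envelope calculation. This is only a sketch of the rough scaling: the linear strain/twist ratio for the length scale, and the factor of 2 for area, are my simplifying assumptions, not numbers from any measurement:

```python
import math

# Rough scaling: the fractional change in the moire length scale goes like
# (heterostrain) / (twist angle in radians), and area scales like length^2,
# so the area change is roughly twice that. Both are illustrative assumptions.
twist_rad = math.radians(1.10)  # nominal "magic angle" twist

for strain in (0.001, 0.003):  # 0.1% to 0.3% heterostrain, per the comment
    length_change = strain / twist_rad  # fractional moire length change
    area_change = 2 * length_change     # small-perturbation area change
    print(f"strain {strain:.1%}: length ~{length_change:.0%}, area ~{area_change:.0%}")
```

A sub-percent strain thus shifts the moiré unit cell area by roughly 10-30%, which is why a "1.10 ± 0.01 deg" label computed assuming zero strain can be badly off.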

Even four years after moiré materials took over the condensed matter world, we still don't understand the basics of how the materials work. It's very exciting, hot stuff.


Excuse the simpleton question:

> When creating these TBG samples, we used to literally tear the graphene in half, to get accurate relative alignment of the two halves. It was very clever, but it imparts a huge amount of strain to the two layers, generally of order 0.1-0.3%.

Does "it" mean the mechanical tearing of the crystal imparts the strain? or instead is it the newly introduced surface boundary (in 1D) that is imparting strain?

[I ask because long ago I was familiar with some of the crazy surface physics that would happen in IV-IV and III-V systems, and just wondering what effects the 1D termination of the 2D lattice might cause.]


The mechanical tearing imparts the strain. Probably. Nobody really knows that for sure.

These days, common practice is to cut the graphene with an AFM or laser prior to stacking.


> One will superconduct, but the other will not. Things like that.

What if you make one and cut it into two equal halves?


When they mention tearing them in half, I’d imagine this more closely resembles what we would think of as slicing, and the tearing effect is just due to the size of the material.


You can probe different areas of the same device by adding many electrical probes, usually in a geometry called a Hall bar. In the old days of TBG, the different regions of the same device would do wildly different things. These days we are much better at stacking, and the different regions of the same device will be mostly the same.


That description reminded me of the observer effect.

The act of measuring changes what was measured.


Which 2019 STM paper are you referring to?



Thanks. All the good stuff is always in the supplemental.


The article is from 2016, before she started her YouTube channel.


I'm probably misreading this, but I see "In this study, age ≥65 years, immunosuppression, diabetes, and chronic kidney, cardiac, pulmonary, neurologic, and liver disease were associated with higher odds for severe COVID-19 outcomes;" listed as the eight risk factors. Where are you seeing the ones you listed?


It comes from their full comorbidity tracking dataset: https://www.cdc.gov/nchs/data/health_policy/covid19-comorbid...

Some of these aren't common comorbidities, but they nonetheless factor into the "died with N comorbidities" averages.


Good news! The CDC does not identify every comorbidity as a risk factor. I’m glad they provide the data so you can make a more nuanced decision on your own.

