We don't know why lithium batteries work (shkrobius.livejournal.com)
269 points by mike_esspe on Jan 27, 2013 | 105 comments



I like the side-point this author makes. Anytime there's a new discovery in quantum theory or some other esoteric, sexy area of research, thousands of armchair scientist knowitalls fall all over themselves to decry or incorrectly interpret what's been found.

Ask a basic question on physics, biology, or engineering and only an actual scientist will pipe up. I blame science fiction novels.


To be fair, if there's a new discovery related to evolution, you are likely to get non-scientists to respond. Granted, those will be people who argue that Darwinism is an evil, false influence in the world, but biology still has its own knowitalls.

Or, look at controversial topics in other domains of science: AIDS was created as a biological weapon, vaccines cause autism. Fluoridated water is a Communist plot to "deplete the brainpower and sap the strength of a generation of American children." The WTC could not have fallen because of planes crashing into it, and must have been leveled due to planned explosives.

This holds even outside of science. Ask a question on copyright laws and a bunch of people with no formal law training (including me) will chime in - leading to complete ripped movies on YouTube prefixed with the magic phrase "fair use."


You forgot "JFK was not killed by Oswald" and "We did not go to the Moon". Apart from that, you should avoid putting ridiculous conspiracy theories next to more believable ones. The fact that governments can and do act in secret and cover it up is hardly a "theory" anymore; WikiLeaks should ring some bells. Always believing what is written in the History Books sponsored by government oversight does not mean you are being fed the truth. Everyone has their own agenda.


I figured that the non-controversial controversies were easier to use as a counter-example.


> Always believing what is written in the History Books sponsored by government oversight does not mean you are being fed the truth.

The alternative to this is not believing whatever Alex Jones and NaturalNews.com say.

In fact, this false dichotomy is amazingly common among Internet conspiracy theory believers: Unless you fall in lockstep with them, you must believe everything Fox News tells you. No middle ground. (Or, of course, they accuse you of being a COINTELPRO Zionist shill, which is really a more extreme version of the same fallacy.)

> you should avoid putting ridiculous conspiracy theories next to more believable ones

Which theories, out of that list, do you think are believable?


I do not believe JFK was killed by Oswald, for example, and I believe there are ample doubts about the official story of what happened in Dallas. And with the archives locked until sometime in the future, who knows when we will finally learn the truth.

There were obviously no WMDs in Iraq even though it was used by several governments as a reason to go to war, so there are many cases in History where we are blatantly lied to and we might as well recognize that and learn from it.

As for the rest, unless you are sufficiently knowledgeable about a particular subject, just saying "I do not know enough to have an opinion about it" would be more honest than putting every idea in the bag with stinky ones.


>I do not believe JFK was killed by Oswald, for example, and I believe there are ample doubts about the official story of what happened in Dallas. And with the archives locked until sometime in the future, who knows when we will finally learn the truth.

For most of my life I believed Oswald wasn't the killer. Then I found a web site that was pointing out all the problems with the Oliver Stone film and it completely convinced me that Oswald killed the president and acted completely alone (I'm not going to look for it, but this should be enough info to go on).

There are problems with the official story and there are things that were locked up in the archives. But all those problems come from the Kennedys, not anywhere else. The reason was that the Kennedys were trying to hide JFK's numerous medical problems (e.g. his horrible case of gonorrhea that a doctor had been unsuccessfully treating for years). He had openly lied about some of these during his campaign, and that info coming out could have wrecked Robert's political future as well. So the family took steps to limit the effectiveness of the autopsy (it was clear how he died anyway, right?), opening the door for all the crazy conspiracy theories surrounding the assassination.

I saw that elsewhere you mentioned the "back and to the left" thing. This is explained on the site as well. If you watch closely you can see that his wife is holding his suit jacket and his head does jerk forward initially but after hitting resistance (her hand holding his suit) it goes back the other way.


The WMD episode actually shows just how utterly crap governments can be at handling real conspiracies. They never came up with any actually persuasive evidence of WMDs before the invasion and never came up with any evidence they'd ever existed after the invasion. The whole thing was a non-starter from start to finish.

If the US government had come up with credible evidence of WMDs before, and then showed off WMDs that they had found in Iraq, and then it had been shown that these were all fake, then that would demonstrate that governments are good at this stuff and can manage realistic and persuasive conspiracies. As it is, the WMD incident shows that when the US government does try to stage a real faked conspiracy, they fail at it from start to finish.


>>>> There were obviously no WMDs in Iraq

The only question I'd have about that is what Iraq declared to the OPCW in 2009: http://www.opcw.org/news/article/status-of-chemical-demilita...

It says "chemical weapons" and I thought chemical weapons are WMDs...


> I do not believe JFK was killed by Oswald, for example, and I believe there are ample doubts about the official story of what happened in Dallas.

Doubts like what? Do you believe foreign government agents killed him? Do you believe domestic government agents killed him? Do you believe he's still alive?

Just saying you have questions doesn't cast doubt on any theories. Evidence casts doubt on theories.

> There were obviously no WMDs in Iraq

This doesn't qualify as a conspiracy theory because the cat was essentially out of the bag from the beginning.

> just saying "I do not know enough to have an opinion about it"

Except I can do research, which conspiracy theory believers seem incapable of.


> Just saying you have questions doesn't cast doubt on any theories. Evidence casts doubt on theories.

The Zapruder film clearly shows Kennedy being hit from the front when approaching the grassy knoll. That's the evidence I am talking about. And when the official government story has a single magic bullet causing seven injuries by going in all crazy directions, then I wonder which one is really the conspiracy theory in the end? You are not the only one capable of doing research.

> This doesn't qualify as a conspiracy theory because the cat was essentially out of the bag from the beginning.

Well it was a conspiracy theory that became true, actually. It's just that there were too many people who did not believe the lie and did not want to walk into it, and foreign powers involved who did not want to be trapped in another war. But the US and UK governments did try to fool people with false evidence as long as they could.

> conspiracy theory believers

Classic fallacy. It's not because some believers are morons that all believers are. And the other fallacy is to put all the theories in the same bag no matter how much research has been done on them: some are obviously ridiculous, others deserve more attention. It's too easy to dismiss everything.


> The Zapruder film clearly shows Kennedy being hit from the front when approaching the grassy knoll.

No, it doesn't.

http://karws.gso.uri.edu/jfk/issues_and_evidence/zapruder_fi...

> And when the official government story has a single magic bullet making 7 injuries by going in all crazy directions

Another lie:

http://mcadams.posc.mu.edu/sbt.htm

> Well it was a conspiracy theory that become true, actually.

You're confusing the issue by bringing up an irrelevant matter. The phrase 'conspiracy theory' has an established definition in English, and it isn't just a theory that involves a conspiracy. It's a theory about a massive conspiracy involving many people over a long span of time. Words (and a noun phrase is a 'word') have meanings based on usage, not logic.


This is clearly getting off topic, but it's not only the Zapruder film that attests to the front shot. Numerous witnesses saw gunfire coming from behind the fence and heard shots coming from different areas. Do you mean to say they are all conspiracy fools? And if that were not enough, Oswald being able to make such a shot was unlikely at best: he was never known to be a good shot in the army, and the rifle he supposedly used could not allow rapid fire such as shown in the Warren Report. And he would have been firing perfect shots through dense tree leaves.

Yeah, the official version makes total sense, for sure. Oswald himself said he was a patsy - why is that not considered a possibility?


I don't like to get involved, but..

>Numerous witnesses saw gunfire coming from behind the fence and heard shots coming from different areas

Eyewitnesses are beyond useless at everything. Gunshots echo. No one 'saw' gunfire. Everyone filled in their own blanks.

For example, take the big aircraft crash at an English airshow: of the hundreds of people asked what they saw, only one remembered correctly. There have also been more scientific studies into this (e.g. one involving a video of a speeding car: some participants were told there was a barn in the video, when there was not. Three days later, when asked about the barn, a significantly higher percentage of the people who had been told there was a barn said they saw one).

Anyway. I have no opinion on the assassination - but if I was trying to make an argument for anything the last thing I would take as true is the statements of witnesses.


> Oswald being able to do such a shot was not very likely at best

The man was a Marine. He was a trained rifleman making a shot that, for him, was close range.

> he would have been firing perfect shots

No. He missed once.

You claim to do research, and to know things, but you get basic facts wrong. This is a common thing to find among conspiracy theorists: Their conspiracies rest on factual inaccuracies.


I just realized I get an internet point every time this thread gets a reply. ~Keep going!~


You sure know your conspiracy theories.


> You sure know your conspiracy theories.

Some believe them, others collect them.


I think the big difference here is that quantum theory and string theory hold enormous implications for the nature of reality. Understanding these concepts, even on a basic level, ignites in us a sort of spiritual fascination with the universe. That's why they are sexy. It has nothing to do with science fiction novels. It's just that understanding the mechanics of a vacuum cleaner doesn't leave us in awe of the universe.


I think you're right about motivation, but that's about ego and fame within a social hierarchy. Someone who loves understanding for its own sake also loves understanding the little bits - and they are better at it, because their attention is not divided, and they are open to serendipity. One can't predict where understanding will come from, e.g. the motion of a spinning plate.

One could only be sure that understanding a vacuum cleaner is unrelated to understanding the universe in hindsight.


There are two issues at stake here. The curious layman would, I argue, rather study the vacuum cleaner than quantum mechanics, if only because the former offers a well-understood topic to study, whereas the latter offers hostile theories and confusion. On the other hand, studying sci-fi physics sheds some light into what's coming to our world in the near term, and what is in flux. This is obviously valuable to a different sort of curious layman, one who is less interested in how everything works and more interested in refining their world view.


It's like reverse bike shedding. Bike sheds have proved too boring for anyone to actually care about their color anymore.


Online, perhaps. But get involved in meatspace and bikeshedding is flourishing like always.


> I blame science fiction novels.

I personally blame self-help literature that borrows scientific terms to construct a fantasy magical world where their readers can actually help themselves by [insert stupid idea here].

And I cringe when someone says something represents a "quantum leap" from something else.


>Ask a basic question on physics, biology, or engineering and only an actual scientist will pipe up.

Evolution/creationist crackpots?


Doesn't this apply to all of science?

I mean, we don't actually know why anything really works, but we have some models which fit the observations at the moment. This is just a very shallow model, i.e. "we poked it and it worked".


Exactly!

We know very little. The same goes for the atomic model and a lot of other things.

Electrons, protons, quarks? That's our model. We know the equations, we know how to predict a lot of stuff.

This guarantees nothing. Maybe tomorrow someone comes up with a different theory that explains things better but doesn't say anything about waves or particles. Who knows.

Quick example: what's a magnetic field? It is a measurable quantity, there are theories about how forces are exchanged between charged particles, but essentially it is more a mathematical aid than anything else.

And physics is the 'hardest' science. Easily reproducible, the same everywhere in the universe (as far as we know), and experiments can be repeated as many times as you want without interference.

Now if you go to more specific sciences (biology, medicine, etc) it gets worse. They are only valid on planet Earth for a short period of time, for a start. Test subjects all have differences and experimenting on them affects the results.


  This guarantees nothing.
It guarantees a whole lot of things. When necessary, you bet your life on these mere 'models'. These 'models' cause you to trust stepping into an airplane and stepping out of the trajectory of a falling rock.

  We know very little.
We know a lot about these models. We investigate where they hold and where they don't. We don't just know they are valid on planet Earth for a short period of time: we investigate whether they also hold elsewhere and held in the past and have every reason to believe they did and do.

To move on to some philosophical background behind your assertion: the idea that there is some 'real', 'objective', 'true' reality that is modeled by these models, is a pre-Kantian point of view. A post-Kantian point of view is that these models of how the universe works are our reality. They are the only thing we can possibly have knowledge of: the models we use to categorize and structure our observations.

Most of them only exist in our brains. They got there by nature and nurture, without us usually spelling them out. They govern our daily behavior. We've tried to formalize the more complex ones in language that is independent of individual humans.

Sometimes observations are not consistent with a model, so we revise the model. Still we only know the new version of the model, which describes what we expect to observe.


Of course, I don't mean 'guarantees nothing' in the sense of "planes won't fly" or things like this.

These results are verified; the math works.

What we don't know is whether our model (as opposed to the math or the results) has anything to do with "reality" (and knowing that may not even be possible).

Take the particle/wave duality for example. We're talking about an electron being 'a particle' but sometimes 'a wave'. Of course, we can predict things about the behaviour of this electron, but the story behind it, of particles, waves, may not be accurate, and this may be replaced by something that completely does away with electrons (difficult to imagine since electrons have mass) but comes with similar predictions.


For all we know, we may never "describe" reality with mathematics, but its purpose is nevertheless to estimate and predict, to a certain degree of precision, an event within its boundary conditions. In this sense, science and mathematics are not one and the same. The scientific method is designed to push these mathematical models past their limits so that we may get closer to discovering what this "reality" really is. These models are how we know when something doesn't fit with our scientific understanding; it is nothing more than a feedback loop.

Edit: hnhg: You're right, rephrased properly :)


I think the parent is making the same point as you are.


>A post-Kantian point of view is that these models of how the universe works are our reality.

Personally, I hate this view. There are actual facts, how things work, and there are our best educated guesses about what we're observing. One of these things is what I would call a scientific fact, the other is our best educated guess (and I don't mean that as negative as it sounds; we can do no better and may never be able to).

I'm fine with saying "this is the best we can do so we have to go with it". I'm not fine with saying "this is the best we can do so this is simply the fact itself" as this leads to ridiculous situations like saying "X is a fact" when in reality there are more questions than answers about X.


This is the very core of our scientific method (critical rationalism, Karl Popper [1]): theoretical models can only be used in science when they can be falsified. That is why we call them hypotheses (from the Greek word for assumption [2]). Logically, no number of positive outcomes at the level of experimental testing can confirm a scientific theory, but a single counterexample is logically decisive.

For example, we can observe a thousand white swans in the world, and hypothesize that all swans are white. That is until we encounter that one black swan, and our hypothesis has been falsified.

[1] http://en.wikipedia.org/wiki/Karl_Popper [2] http://en.wikipedia.org/wiki/Hypothesis


No, the black swan is a figment of your imagination. It wasn't a swan. You didn't know what you were looking at. You made it up. I don't care that lots of people saw it, you're all delusional. You're crackpots.

The problem with the scientific method is that scientists become very attached to their hypotheses, and the longer a hypothesis stands the more attached scientists get. This causes them to reject contradictory evidence, often going to ridiculous lengths to do so. A change of the hypotheses, a scientific revolution, only happens on the fringes of science when a maverick persists in examining evidence that most scientists reject.


I would recommend reading "The Structure of Scientific Revolutions" by Thomas S. Kuhn.

Scientific revolutions are far more widespread and involve many more scientists than you may think. Even during the earliest stages of the Copernican revolution there were many scientists who subscribed to its ideas. You may only hear about Copernicus, Darwin, Maxwell, and a few other notable scientists, but the revolutions included many, many people. Some of these people may have irrationally stuck to their previous hypotheses at each of these transitions, but that is GOOD. Skepticism in science is critical to its function. If the skepticism is not entirely misplaced, or competing theories still have merit for exploration, it will not die out. (It's more complicated than this, but for the most part it holds true.)

As for the climate change comment in reply to this post's parent: go on scholar.google.com and find real peer-reviewed papers on climate change, or go find the high-impact environmental journals and often-cited papers. Then find the papers that cite those papers that contain actual data!

Scientists have gathered a massive amount of data on melting ice, carbon dioxide concentrations, and world wide pollution. The jury is still out on whether or not humans are a major contributor to carbon dioxide emissions but it is irrelevant when atmospheric CO2 concentrations and temperatures are increasing. This may simply be a natural cycle but it is wise to err on the side of caution until we know more about the equilibrium conditions of the system.

The more troubling problem is pollution and deforestation that reduces the number of oxygen producing organisms. Even after ice ages and mass extinction events, the most important organisms for oxygen production in our atmosphere have been phytoplankton, which currently account for probably about half of all of the oxygen produced by plants. Already there is evidence pollution is drastically reducing these populations [1][2]. If this is proven to be true, there is unlikely to be some other mechanism on Earth to replace the phytoplankton in oxygen production (unless we force the evolution of a phytoplankton species immune to industrial pollution...)

Edit: [1] http://www.nature.com/nature/journal/v466/n7306/full/nature0...

[2] http://www.nature.com/nature/journal/v444/n7120/full/nature0...

I apologize that these papers are behind a paywall, but no news article for the public can do them justice. When you jump into the data for both papers you see science in action: a complex web of variables and events that we are trying to understand.


Why is it so hard to build an efficient artificial solar powered CO2 + H20 -> O2 + carbohydrate machine?


The mechanisms for converting water and atmospheric CO2 into oxygen evolved over a billion+ years before the Earth's atmosphere even supported aerobic organisms on land, with every variable crucial to the survival of producer organisms painfully optimized. I don't think we even have a good quantum mechanical description of chlorophyll and its electron transport chain, which might have ridiculously high efficiencies compared to our solar panels (I think I read this somewhere?). Once you add the chemical pathways for taking the energy and storing it with CO2/H2O, you increase the complexity many orders of magnitude. Even if we can replicate the pathway, scaling it up to actually impact the CO2/O2 ratios would be both economically and technologically difficult.


It's hard to beat the cost/efficiency ratio of trees.


Prime example: climate change.

We have precisely fuck all idea what is going on regardless of the models.

Proof of this can be found by examining all the climate change news articles based on scientific papers going back to the impending ice age in 1972 (people have a short memory for climatology theories).


"Proof of this can be found by examining all the climate change news articles based on scientific papers going back to the impending ice age in 1972 (people have a short memory for climatology theories)."

While it makes a good talking point, it was not the case.

http://www.skepticalscience.com/ice-age-predictions-in-1970s...


News articles can not be relied on to present science and the views of scientists accurately. Controversy is news, consensus is not. Very few publications care enough to get their science reporting right, and it's expensive to have people on staff who are capable of doing it.


Relevant Asimov's classic article: http://chem.tufts.edu/answersinscience/relativityofwrong.htm

YMMV.


Thank you, you explained it better and went beyond the usual repetition.

There's so much of this it's not even funny. The last part of my post was in this same vein.

But in physics this is easier to work around. In other areas, not so much.

Also, the scientific method and Popper's ideas are a philosophical construct. One has to think: who watches the watchman?


The scientific method's own error correcting mechanisms also operate on itself. Our understanding of epistemology, repeatability, and falsifiability have evolved alongside and as a result of the scientific method as we have journeyed through mechanics, electromagnetism, statistical physics, and quantum mechanics and relativity. Popper's own seminal work was due to what we have learned from applying the scientific method and on a meta level substantially changed what "science" codified for many of its practitioners.

However, even the need for falsifiability is sometimes suspended (although not lightly and with much criticism) when there is something interesting to pursue like in string theory, where we are operating on energy levels we may never be able to experimentally reach.


Yes, there are some self correcting mechanisms. And of course, sometimes the problem is not in the scientific method but in the people working with it.

This discussion can go on and on, but I'd like to point out the scientific method is not applicable to several things.

String theory, like you pointed out, and other things still in the realm of speculation are one thing (even though their foundation is scientific work).

Getting the best page for 'Cat pictures' on Google is another thing. Or translating a phrase to Japanese.

The success of the scientific method in physics is due in big part to the issues I pointed out: physics is constant. The LHC repeats collision experiments thousands of times, so you can get a result with as little variance as possible.

However, it is not possible to test a new drug in millions of people and see what happens, not to mention the results are going to be different for each person, so the best you can have is a statistical answer.


What is the problem to begin with? Science is not some epistemologically omnipotent entity. It is an integral part of human culture and can only be expected to evolve and behave as such. The scientific method is a guiding principle agreed on by its practitioners but that is all that sets it apart from the rest of culture.

The scientific method does work for string theory just like it worked for relativity and other 20th century theories that took a while for experimental confirmation. String theory got a lot of people very excited a few decades ago and so it stuck, just like an artistic or fashion style sticks to a generation or civilization. It is still in the very primal stages, like Newton in the time after the apple metaphorically hit his head, as he was using Kepler's laws to derive classical gravity. This time the math is way more complicated and will take way more time (and likely classical/quantum computational resources) to turn into a theory ready for experimental falsification. Whether or not it truly holds promise or is just a "fad" is irrelevant; the only vector for the evolution of science is purely human.

"Best page" is purely subjective. If you draw the analogy between recommendation algorithms and scientific theories, then the person would be the equipment. If you calibrate properly, you should be able to switch out equivalent pieces of equipment and get the same experimental conclusion based on the scientific theory. You can't "calibrate" a person so the primary feedback loop on a recommendation algorithm changes with every user. Until you get down to human psychology and neuroscience, there is really no basis for this problem in what we know of as "science."

As for translation, I'm sure our research of the common threads of language have contributed to translation algorithms (although I have not researched this topic).

Your last point about physics and medicine is moot. Our very basis for "fact" in science is the statistical results of a vast number of experiments. You say that statistics is "the best you can have" as though there is some gold standard against which you are comparing the scientific method. There is no better gold standard than the statistics naturally built into the science. For all of its imperfections, it is the absolute best method we have for knowing (approximate) truth in a nontrivial way (aka, truth not as we see it in our mind but truth that can be experienced and repeated by the vast majority of humanity given the tools).

Not long ago, we didn't even have a concept of molecules that would act as drugs to impact our health, let alone simulate them in live and computational models (which is growing more and more commonplace) to know exactly how a person's unique biochemistry will react. We will probably always have statistical answers, unless there is some mechanism for nature to reveal "truth" to us. The whole point of science is that those models get more and more accurate over time.


> who watches the watchman?

There's this paranoid viewpoint that Science is this exclusive club shrouded in secrecy, a little like the Masons. They hold on to their established views despite conflicting evidence from the fringes, thus continuing their tenures, fat paychecks, international fame and ultimately control over the world.

There is no watchman. There are scientists. Scientists watch each other with a tenacity that could almost be seen as dogmatic, but really it's because errors help no-one. To become a scientist you do science. If you have a concern about a particular field of science, study up so that you're as knowledgeable as those in that field, and discourse. That's it!


But science is an (expensive) human endeavor. Tenure committees, grant committees and budget administrators are in a position to choose which paths of research are pursued.

Most of the time this is a very good thing, as it prevents limited resources from being wasted, but bureaucracies can suffer from groupthink, and senior scientists are often reluctant to undermine their career's work. However, the system we have is probably among the best possible. We cannot avoid the problem of the purse strings.


There is a lot of politics in the process of funding science, and I would argue the system is quite broken, but it still operates within the scientific community. There may be a new class of hybrid scientist-bureaucrats (aka professors) that have a strong grasp on research direction, but they are still scientists. Each individual may have their biases, but the sum of all of the scientists is made up of so many different people that the only common link between them is their belief in the scientific method, and this collective ultimately guides its progress.

Also, I have to nitpick: science is by no means an expensive human endeavor. The net global societal benefit per dollar spent, even with all of the added bureaucracy and other overhead, far outweighs other spending, maybe even spending on education. The problem is that science is long term, something few seem to have the stomach for.


> Tenure committees, grant committees and budget administrators are in a position to choose which paths of research are pursued.

True, but their work is still subject to external scrutiny and you don't need tenure to do that.


From my experience in the research department of my university you are quite mistaken.

"External scrutiny" usually means "publish N papers per year and don't get into trouble". Hence the most conservative projects get funding (also because they're more likely to be successful - or better said, easier to do)

And by the way, the majority of the work is done by the graduate/PhD students, and of course their advisor's name is on every paper they publish.


No, external scrutiny means that the quality of the papers is vetted by scientists the world over. Once the papers are published they are read, reviewed, checked and, especially for important stuff, the experiments are repeated. It still has to be good science.

Sure, some interesting stuff is overlooked, but any research is progress. If your university chooses to be conservative, that just means less risk on their side, but also less chance of a major breakthrough, press, status and patent rights. It's the road they've chosen, but not the only road.

And everyone knows that the grad students are doing the actual grunt work. Outside of academia there are also bosses whose function seems to be solely to grab credit from those doing the actual work (but more often this is just a myopic view of what their actual job entails).


Elegantly described!


I agree with you. Whenever someone argues that we know a lot about the world and how it works, the following quote comes to my mind:

"The universe is not only stranger than we imagine, it is stranger than we can imagine"

J. B. S. Haldane


Watch it. Next thing you know you'll admit that we don't really "know" anything and get into the realm of metaphysics! But, I agree. In fact, the fallibility of science should be something taught in elementary school along with the scientific principle. It is already taught that science helped us get from "the sun revolves around the earth" to "the earth revolves around the sun", but we are not taught about differences in outcomes of valid scientific studies proving AND disproving global warming and what that really means, which is that science is biased, flawed, and does NOT elucidate truth, but it is helpful and practical in many ways and produces what is currently accepted as "fact".

It is important to have a critical mind when it comes to science and to understand that faith doesn't just reside in religion - atheists and those that claim that science has the answer to everything often truly believe that science is a constant, or that it "evolves" as some sort of mystical truth that will lead us to enlightenment, but, practically speaking, science is always incorrect. We don't have all of the answers and we never will. Don't get me wrong - science has made huge advancements in our civilization, and almost everything we interact with has been produced in part or has been affected by science, often in a good way but not always. However, science is not and never should be a religion. You can use scientific principles and "facts", but if you call it science, do not be beholden to those "facts".

This does not mean that we should give up exploration or research at all. But by accepting the wonder that is everything we don't and will never completely understand, we are all scientists.


No faith needed, at least for a philosophically grounded scientist. We are never going to get at some essential true nature through science. What we can do is expand our reality and capabilities in the world by creating useful models, and creating new models when the old ones are no longer sufficient.

Almost every person, scientist or no, has personal beliefs in addition to understanding of some of the scientific models we use. The two shouldn't be conflated though.


The fallibility of thought and observation should be taught over "science", because otherwise we'll have even more obnoxious pseudoskeptics than we do today, who think that yeah, what does "science" know anyway? Then proceed on with their folksy but ungrounded, still harmful beliefs.


But people associate "the results of the study on..." with science and fact (truth), rather than with something that is possibly evidence-based but may be wrong because it is biased, misguided, or incomplete. People assume that there is such a thing as fact produced by science, and when they learn this "fact" from a teacher, a friend, the media, etc., they believe these things are true and that they will not change. That is not science.

Using unhelpful skepticism as a reason for hiding the fallibility of science will produce more generations of people that believe whatever the scientists tell them, which at times will be wrong and yet accepted by almost everyone as truth.


This is a problem far more deeply engrained than just the public's perception of science.

I think the primary problem is the inability of most people to self-reflect on what they know and, most importantly, whether they even know it at all; to critically evaluate their assumptions about the universe, and the extra meta level of how they came to those assumptions. It is painful to mercilessly question yourself and your entire belief system, and in today's world it is probably harder than ever to build a coherent picture of reality due to the increasing noise and complexity of ideas.


"Using unhelpful skepticism as a reason for hiding the fallibility of science"

I'm not suggesting that, I'm suggesting a de-emphasis on framing it as a matter of "science" and leaving folk wisdom and all other assumptions of perspective and "trusted sources" untouched.


If you think that, you've missed the point of the article.

We know how almost everything we use works -- in that we can connect the initial "shallow" models to more fundamental models of physics. Apparently this one has eluded us so far.


This is a philosophical stance that goes beyond the article's point. We have very good models for most of technology, but we don't know enough about how lithium batteries fit into those models. It's kind of tangential though. The point of the article seems to be that we ought to pay more attention to tangible things that we don't understand (like how to design more efficient batteries that would improve people's lives in real ways) than to pointless interpretations of string theory. I tend to think there is value in keeping in mind that we can't actually be certain of things, because the alternative seems quite pompous to me.


I think his point was that we don't even have a model for the Li-Pol battery.


Something makes me suspect this article is missing part of the truth, similar to sites that claim we don't know how bees can fly (we do).


Yeah, this piece is a little misleading. The author is referring to a nuanced difference between ethylene carbonate (EC) and propylene carbonate (PC) in the formation of a protective film around anodes.

See this (paywalled) review of electrolyte chemistry: http://pubs.acs.org/doi/abs/10.1021/cr030203g http://www.ncbi.nlm.nih.gov/pubmed/15669157

""" The unique position of EC as a lithium battery electrolyte was established in 1990 when Dahn and co-workers reported the fundamental difference between EC and PC in their effects on the reversibility of lithium ion intercalation/deintercalation with graphitic anodes.36 Despite the seemingly minute difference in molecular structure between the two, EC was found to form an effective protective film (SEI) on a graphitic anode that prevented any sustained electrolyte decomposition on the anode, while this protection could not be realized with PC and the graphene structure eventually disintegrated in a process termed “exfoliation” because of PC cointercalation. The reason for the effectiveness of the SEI has incited a lot of research interest in the past decade but remains an unsolved mystery, although it is generally believed that EC undergoes a reduction process on the carbonaceous anode via a similar path to that shown in Scheme 1. Because of the important role this SEI plays in lithium ion chemistry, the research efforts on this topic will be reviewed in a dedicated section (section 6). """


So still, the main claim sort of stands - there are unsolved mysteries in everyday life which we still don't fully understand, but that we use anyway.

(I know nothing about electrochemistry or batteries :( )


What do you think is misleading about the linked LJ post? The abstract you've quoted here seems to back up every word in it.


I agree. I've only high school electrochemistry to draw on, but questioning this sort of thing becomes much easier if we focus on the exact claim:

* When reduced, ethylene carbonate forms a dense material that lithium cations can pass through but solvent molecules can't.

* Propylene carbonate does not form this/similar material.

* Using propylene carbonate therefore allows the solvent to come into contact with and weaken the graphite anode.

* Nobody knows the chemical process that forms the material from ethylene carbonate but not from propylene carbonate.



It took a while to understand the mechanism exactly, but it was always clear that it had to do with fluid turbulence, which is extremely hard to model.

The claim that "science says bees can't possibly fly" was always wrong, based on models for static wings that just don't apply, as everyone with an ounce of competence knew all along.


Now find the one explaining why, when you sit around a campfire, the smoke always comes right at you.


Some people attribute this to confirmation bias (and there is a reply here doing just that), but it is a real phenomenon due to several causes. The simplest is the fact that, of all the directions the wind might blow when you are standing next to a fire, you are blocking the one that would blow the smoke directly away from you. This gives a slight bias toward the smoke being blown toward you when you stand near a fire.

However, when you are standing "upwind" of the fire you don't simply block the wind, you create a recirculation zone of turbulent air in front of you. If you are close enough to a fire then this recirculation zone will be able to suck in the smoke from the fire, towards you.

Additionally, the heat of a fire will create a convective flow, with air flowing upward (due to the heat) and being replaced from all sides laterally. So even if there is no wind there will still be a convective airflow which gives rise to that turbulent recirculation zone in between you and the fire.

Thus, if you are close enough to the fire, the smoke will be attracted to you in every case except when the wind is blowing substantially sideways relative to the line between you and the fire.

This can be avoided simply by standing far enough away from the fire to avoid these various effects.


The explanation is welcome. The solution is not so useful, because often the fire is there to keep you warm. Standing further away may defeat the point entirely :)


You can make a bigger fire, or heat rocks in the fire and sit in front of the rocks, or burn the wood to charcoal under a big pile of dirt before using it for your fire, or pipe water through the fire and run it through a radiator, or insulate your tent, or build big thick adobe walls with enough thermal mass to keep your temperature reasonable, or use careful passive solar design to keep your entire house at a comfortable temperature all year round.

There are options that don't involve smelling like a campfire.


Confirmation bias, selection bias.


Because you suck all the air out of the room.


"“Science literacy” tests quiz the initiated on their command of abstract dogmas acquired through no exercise of one’s ability to generate knowledge."

This is something that often bothers me about the r/atheism crowd.

A young person encountering an idea like evolution for the first time should be extremely skeptical and require much convincing, because it is not an intuitive idea at all. Believing in it just because the teacher says you will get a bad grade on the test if you don't does nothing to inculcate scientific thinking in young people.

Now, students unwilling to engage with physical evidence obviously have a different problem. But with the level of discourse I see coming out of many proponents of atheism on the Internet, I often feel many of them are proud of their "command of abstract dogmas" and are not particularly people demonstrating the "ability to generate knowledge."


Personally, I thought evolution through natural selection was pretty intuitive and obvious, given the previous instruction on how DNA and chromosomes worked. If everything changed a little bit, and only the good changes stayed, then they'd all stack up and constitute a big change!

DNA and chromosomes were pretty unintuitive, but we have pictures of those, so that helped.

Also, I think Occam's razor applies even if you're a young person who hasn't heard of it yet: with no previous knowledge on the subject, I am wont to accept the simplest explanation that doesn't contradict reality without much skepticism.


Presuming everything will follow a linear path is simple.


> A young person encountering an idea like evolution for the first time should be extremely skeptical

Not really. Evolution follows directly from heredity and natural selection. We turned wolves into shih-tzus through artificial selection. From that point, it's natural to imagine what natural selection does to a population.

I often find it useful to use a disadvantageous trait as an example. If all your family is, say, allergic to peanuts and all you have to feed your tribe is peanuts, your family will probably leave fewer descendants than those who can eat them. Within a couple of generations, the trait will be eradicated.
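
A toy simulation makes the arithmetic of this concrete. (This is my own illustrative sketch in Python; the reproduction probabilities are made-up numbers, not data.)

  import random

  # Hypothetical odds that a member raises two children to adulthood;
  # the allergic go hungry, so their odds are lower.
  REPRODUCTION_ODDS = {'allergic': 0.3, 'tolerant': 0.6}

  def next_generation(population):
      offspring = []
      for trait in population:
          if random.random() < REPRODUCTION_ODDS[trait]:
              offspring += [trait, trait]  # the trait is inherited
      return offspring

  population = ['allergic'] * 50 + ['tolerant'] * 50
  for gen in range(8):
      population = next_generation(population)
      print(gen, population.count('allergic'), population.count('tolerant'))

On average the allergic line shrinks by 0.6x per generation while the tolerant line grows by 1.2x, so the trait is all but gone within a handful of generations.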


> Still no one knows what chemical process yields this material, what is this material, and why a small structural change in the electrolyte makes such a colossal difference in the performance

I read this and immediately saw a parallel with computer science. To quote from CLRS: [1] "Computer scientists are intrigued by how a small change to the problem statement can cause a big change in the efficiency of the best known algorithm". It strikes me as incredible how seemingly simple concepts can be the tip of a much bigger iceberg of complexity. Another example of this would be the proof of Fermat's Last Theorem [2]. A proposition one can understand with primary school mathematics, whose proof eluded mathematicians for centuries and required the invention of new mathematics. Sir Andrew Wiles, the person who proved the proposition, had to see deep symmetries between a plethora of domains in mathematics. The main idea that I find remarkable is the incredible complexity of the world in which we live - even the smallest of changes to a concept we think we understand (electrochemistry, algorithms, mathematics) can redirect the trajectory of our understanding completely. To me, this seems as if we have only the most superficial understanding of the myriad structures and substructures of the universe. New developments in all areas of science excite me greatly, purely because our understanding has been advanced that infinitesimal bit more.

[1] http://en.wikipedia.org/wiki/Introduction_to_Algorithms [2] http://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem
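
A minimal sketch of that CLRS point (my own example, not from the book): change one word in the problem statement, "shortest" to "longest", and the best known algorithm goes from linear-time breadth-first search to essentially brute force, since the longest simple path problem is NP-hard.

  from collections import deque
  from itertools import permutations

  def shortest_path_len(graph, s, t):
      # Breadth-first search: O(V + E) on an unweighted graph.
      seen, queue = {s}, deque([(s, 0)])
      while queue:
          node, dist = queue.popleft()
          if node == t:
              return dist
          for nxt in graph[node]:
              if nxt not in seen:
                  seen.add(nxt)
                  queue.append((nxt, dist + 1))
      return None

  def longest_simple_path_len(graph, s, t):
      # No polynomial algorithm is known: try every ordering of the
      # intermediate vertices.
      best = None
      inner = [v for v in graph if v not in (s, t)]
      for k in range(len(inner) + 1):
          for mid in permutations(inner, k):
              path = (s,) + mid + (t,)
              if all(b in graph[a] for a, b in zip(path, path[1:])):
                  if best is None or len(path) - 1 > best:
                      best = len(path) - 1
      return best

  g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
  print(shortest_path_len(g, 1, 4))        # 2
  print(longest_simple_path_len(g, 1, 4))  # 3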


Easy to claim in chemistry, where loads of research is done by simply trying everything and seeing what works. It's no longer interesting to understand why. Medicine, industrial processes - it doesn't matter. I read once that the last medicine designed by a chemist was the cure for syphilis, decades ago. Probably not true anymore, but lots of Edison-style 'science' is still being done.


Producing, understanding, and testing the vast number of chemical interactions are vastly different fields of chemistry (and biology). There are many researchers working on understanding the basics of electrochemistry which can lead to a better understanding of this phenomena, even if they're not specifically looking at this example.

However, those are probably in the minority compared to these "Edison-style" scientists, as you call them, and for good reason. Phenomena not yet explained by the scientific canon are vastly outnumbered by the possible configurations and combinations of chemicals that we have yet to explore. We can't just dedicate every chemist in the world to the problem of understanding the fundamental laws of chemistry and hope to get anywhere. Science just doesn't work like that.

Identifying and testing chemicals that have applications in medicine or industry has massive impact all over the world and is easy relative to a fundamental understanding of nature. What use would science be if it was mostly dedicated to understanding, instead of applying its knowledge to the needs of its people?

Besides, what we can discover and learn about the natural world is often heavily limited by the progress of technology, especially in industry. The incredible proliferation of personal computers and the constant downward force on the cost and energy consumption of CPUs, GPUs, and storage in the last two decades has made the LHC possible today. Imagine building a supercomputer 20 years ago just to store, let alone analyze, all of the data coming out of the collider's instruments. We may have lots of technology to take snapshots or do "traces" of chemical reactions, but we are still quite limited in our ability to "see" live chemical reactions (and the methods require expensive equipment).


"I read once that the last medicine designed by a chemist was the cure for syphilis - decades ago."

http://www.vice.com/read/interview-with-ketamine-chemist-704...


"where loads of research is done by simply trying everything and seeing what works."

Do you seriously think research is "throwing shit against a wall and seeing what sticks"?

"I read once that the last medicine designed by a chemist was the cure for syphilis - decades ago"

Yet another folksy anecdote parroted by the same people who claim "the last disease we've cured was Polio". Saying these things does not make them true.


"Do you seriously think research is 'throwing shit against a wall and seeing what sticks'?"

That's exactly what most pre-clinical research is. Testing hundreds of thousands of different molecules in hopes of finding one that works. For every NME that makes it onto the market, there are several thousands or tens of thousands that are initially tested.


Theory dictates what you try out, though... What cure for syphilis is that? I thought antibiotics were the cure.


I always wondered why everything related to my computing devices seems to follow some variation of Moore's law except the battery. This explains it.


I don't know if this is really the reason. Moore's Law started with planar photolithographic silicon semiconductor integrated circuit fabrication (aka making chips) around 1958. We're now in year 50 of planar photolithographic silicon semiconductor integrated circuit fabrication, although now we have environmental laws so we can't just dump hydrofluoric acid into the groundwater, and we have to use X-rays instead of ordinary light, and we worry about smaller dust, stuff like that which makes it a lot harder. But basically we've just been scaling stuff down. We have about another ten years before we reach the physical limits of planar photolithographic silicon semiconductor integrated circuit fabrication, which are about two orders of magnitude away. That means we started about 17 orders of magnitude away.

By contrast, we seem to have reached the fundamental limits of carbon-zinc batteries a couple of centuries ago, and the fundamental limits of ordinary electrolytic batteries in 1990 with the invention of the lithium battery. (Or maybe the zinc-air battery.) The fundamental limits on chemical energy density seem to be about two or three orders of magnitude from where we started, not 17.

In both cases it's possible to do better by using fundamentally different approaches: microturbines, tiny fuel cells, or betavoltaic batteries in the case of batteries; diamond, three-dimensional structures, and molecular assembly in the case of electronics. But all of these have substantial engineering obstacles.
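
Back-of-the-envelope, the difference in headroom is stark. (A sketch of my own arithmetic, assuming the orders-of-magnitude estimates in the parent comment.)

  import math

  def doublings_in(orders_of_magnitude):
      # Number of doublings needed to span 10^N.
      return orders_of_magnitude * math.log2(10)

  print(round(doublings_in(17)))  # chips: ~56 doublings of headroom
  print(round(doublings_in(3)))   # battery chemistry: ~10 doublings

At one doubling every couple of years, roughly 56 doublings can sustain a Moore's-law curve for about a century, while roughly 10 doublings are used up within a couple of decades.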


Not an expert but I find this difficult to believe. We know a LOT about battery chemistry.


I think the author intuitively knew that "A slight difference in electrolyte composition results in dramatically different long-term electrode reliability" would not get quite as many page views as the completely false "No one has the slightest idea how and why it works." Especially by carefully phrasing the article such that the "it" refers to a hyper-detailed corner case but implies it's the entirety of mankind's chemistry knowledge, or at least battery chemistry.

So three good PR lessons:

1) Dramatically exaggerate

2) Confuse pronouns... which "it" is the author writing about?

3) Appeal to ignorance, wave away all that science-y stuff in favor of awe and wonder.


I thought the article was fine.

1. We really have no idea what the molecules are doing in this protective layer. See the quote from an actual paper in semenko's post.

2. The it is the bit that makes lithium batteries not fall apart.

3. What the hell are you talking about with appeal to ignorance? The article is just explaining that there are gaps in our knowledge, basic science that we can measure, use, tweak but don't know the mechanism. There is no attempt to prove or disprove anything.


The author of the article is, however, an expert. In chemistry (among other things). And the fact that we know a lot about the chemistry of some other batteries does not in any way make it impossible that we know very little about the chemistry of this specific one.


It sounds like you're familiar with the author previously, and know them to be a polymath. What is their background?


We didn't know exactly why lots of pharmaceuticals worked either, and we still don't in some cases.


I also wonder about the "graphene sheets" - it's not the first time I've read that in the description of a battery. Surely it's graphite, not graphene? I don't think batteries are built with one-atom-thick layers.


He means graphene that's incidentally always there in graphite ("in graphite in the anode"). We didn't coax it into sheets, it's just there and it's what makes things happen.


Yes, the early batteries used graphite (graphene had already been proposed as a theoretical possibility, but was not isolated in a lab setting until 2004).

However, with its discovery, graphene has been extensively researched for batteries, and more research is ongoing. Graphene is used in the manufacturing process for some lithium batteries, where I assume it is used to build the porous graphite structure with fewer impurities and better control of its composition (total speculation).


Graphite is composed of multiple sheets of carbon atoms, while graphene is a single such sheet. Referring to the sheets as "graphene sheets" is not exactly incorrect, but it sounds much cooler.



Doesn't address the specific unknown described in the article, but yes, we do know how Li-ion batteries work, just not how the specific solvent used prevents the batteries from rapidly degrading.


We may not always know how or why (at the deepest fundamental levels). We don't need to. We use science to know that it does.


A digression, but the writer seems Russian by virtue of his/her sentence construction. I can't really pin down the reason; perhaps a linguist can help me out? (Disclaimer: I'm not Russian. Edit: By the comments on his page it looks like he actually is Russian. Also the use of LiveJournal.)


Related to not knowing why lithium works in the brain?


I read it in 12th :p



