I always say that important scientific discoveries arise from better tools. Once technology can make the tool, it doesn't take long for the discovery to be made. It may take a genius, but there are a lot of geniuses on Earth.
Without precision instruments, how could we know that light has a speed, that stars are not points of light on a crystal sphere surrounding the earth, that diseases are carried by microbes... General relativity was a way to resolve anomalies in the orbit of Mercury, something only detectable with precise measurement. Without that measurement, general relativity would make no sense: why use all these complex formulas when Newtonian gravity works just as well? Send Einstein's work back to the 16th century and it would be slashed by Occam's razor.
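To put a number on how subtle that anomaly is, here is a quick back-of-the-envelope sketch (rounded textbook values for Mercury's orbit, the standard first-order GR formula, nothing official): the relativistic perihelion advance comes out to only about 43 arcseconds per century.

```python
import math

# Back-of-the-envelope: the extra perihelion advance of Mercury predicted by GR,
# using the standard first-order formula dphi = 6*pi*GM / (c^2 * a * (1 - e^2))
# per orbit. All constants are rounded textbook values.
GM_sun = 1.327e20      # gravitational parameter of the Sun, m^3/s^2
c = 2.998e8            # speed of light, m/s
a = 5.791e10           # Mercury's semi-major axis, m
e = 0.2056             # Mercury's orbital eccentricity
period_days = 87.97    # Mercury's orbital period, days

dphi_per_orbit = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))   # radians
orbits_per_century = 100 * 365.25 / period_days
arcsec_per_century = math.degrees(dphi_per_orbit * orbits_per_century) * 3600

print(f"{arcsec_per_century:.0f} arcsec/century")   # ~43", a tiny residual that
# only shows up after precise, long-baseline positional measurements
```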
The "tenth of a second" problem is just that, technology able to measure tenth of seconds opened new fields.
I don't mean that technology is more important than scientific research, but they go hand in hand: no good science without good observation, no good observation without good tech, and no good tech without good science.
> General relativity was a way to resolve anomalies in the orbit of Mercury, something only detectable with precise measurement. Without that measurement, general relativity would make no sense: why use all these complex formulas when Newtonian gravity works just as well? Send Einstein's work back to the 16th century and it would be slashed by Occam's razor.
I don't think that's entirely true. As far as I know, the Mercury precession anomaly wasn't considered a serious problem for Newton's theory at the time.
Einstein was motivated by theoretical problems that arose from trying to reconcile Newton's theory with special relativity, which inspired him to take the principle of equivalence seriously: the notion that gravitational mass and inertial mass are fundamentally the same. This is what led to the famous field equations. Their derivation is not some sort of reverse engineering of the Mercury precession anomaly. Someone wrote a few words about it here [1].
The fact that GR could explain the Mercury anomaly was a bonus. The breakthrough, however, came with the first measurement of light being deflected by the sun during a solar eclipse.
> I always say that important scientific discoveries arise from better tools
I dunno, from Archimedes to Newton, and from Maxwell to Einstein, most important scientific discoveries arose from thinking breakthroughs (and led to better tools, as opposed to having been caused by them). The "better tools" at most helped verify them after they were expressed, not reach them.
It's the incremental, evolutionary stuff, that's mostly helped by better tools...
Actually he is correct; the phenomenon of experimental physics seeming to lag behind theoretical physics is a fairly recent one. It's also largely confined to a small subset of fields that seem to get most of the attention. There are a lot of fields (such as nonlinear dynamical systems) where our theoretical understanding still significantly lags experimental observation.
You can think up "breakthroughs" all you want, it's easy. As any mathematician or software developer can confirm, it's easy for a human to propose an explanation and immediately convince themselves of its correctness; validating hypotheses is hard work that the brain desperately tries to avoid. And in physics, validating things on paper can only get you so far - you ultimately need experimental confirmation to make solid science.
Here is where technology becomes necessary: in order to favor one hypothesis over the other, you need to find where they give conflicting predictions, and then check what actually happens in reality. That's only possible when you can perform the necessary experiment and observe the results with sufficient precision to discriminate between competing explanations. Technology is what gives you this capacity, expanding the scope of experiments you can run.
As someone else mentioned, you could drop Einstein back a few hundred years and his work would be laughed off - because ultimately there would be no way to tell if he's right or not. The theory may look beautiful (if complex) on paper, but if you can't discriminate between it and alternatives through an experiment, it's meaningless and you may as well use the simplest model that fits all your current observations.
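To make "discriminate between competing explanations" concrete: the classic case is starlight deflection at the solar limb, where a naive Newtonian calculation and GR disagree by less than an arcsecond. A rough sketch, using rounded solar constants and the standard textbook formulas:

```python
import math

# "Find where they give conflicting predictions": starlight grazing the Sun.
# A naive Newtonian (corpuscular) calculation gives a deflection of 2GM/(c^2 R);
# general relativity predicts exactly twice that. Rounded solar values below.
GM_sun = 1.327e20    # m^3/s^2
c = 2.998e8          # m/s
R_sun = 6.957e8      # solar radius, m

rad_to_arcsec = math.degrees(1) * 3600
newtonian = 2 * GM_sun / (c**2 * R_sun) * rad_to_arcsec   # ~0.87"
einstein  = 4 * GM_sun / (c**2 * R_sun) * rad_to_arcsec   # ~1.75"

print(f'Newtonian: {newtonian:.2f}", GR: {einstein:.2f}"')
# The two predictions differ by under an arcsecond, so telling them apart
# (as the 1919 eclipse expeditions did) takes astrometry good to a fraction
# of an arcsecond -- no amount of armchair reasoning can settle it.
```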
> The "better tools" at most helped verify it after it was expressed, not reach it.
In doing so, they validate said scientific breakthroughs. Science is not only about coming up with models or hypotheses. Science is "theory and validation." Take out either one, and you don't have science.
Humans have been theorizing and creating mental models for thousands of years. But only with better tools were they able to select verified models, discard those that didn't fit observation, and, as a consequence, make more and more sophisticated and accurate predictions.
To take two recent examples, the first photograph of a black hole and the first detection of gravitational waves (both predicted by Einstein's GR, coincidentally) required a stupendous amount of collective brainpower from brilliant engineers and scientists. 100% of that brainpower went into figuring out how to build those better tools and analyzing the collected data. The scientific community itself has acknowledged the criticality of those better tools: the 2017 Nobel Prize in Physics was awarded for building a "better tool" to observe gravitational waves, LIGO [1].
This is not to say that theory isn't important, but to call out that observational tools don't just "help verify theory"; they are an integral part of science.
There is, of course, a way to validate theory within its own framework, and that is what mathematicians do. I guess philosophy is a similar domain, but I have not read much about it, so I can't comment.
On a related note, I highly recommend this panel discussion about GW, LIGO, etc., straight from the horse's mouth. LIGO is absolutely mind-blowing, a testament to collective human ingenuity [2].
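Just to put a number on "mind-blowing", a rough order-of-magnitude sketch (using the commonly quoted ~1e-21 strain sensitivity and 4 km arm length, not official LIGO figures) of the displacement LIGO actually resolves:

```python
# Order-of-magnitude only (my own back-of-the-envelope, not LIGO's official numbers):
# LIGO detects a fractional length change (strain) of roughly 1e-21 across 4 km arms.
strain = 1e-21           # typical strain of a detectable gravitational wave
arm_length = 4.0e3       # LIGO arm length, metres
proton_radius = 8.4e-16  # metres, for scale

delta_L = strain * arm_length
print(f"arm length change: {delta_L:.0e} m, "
      f"roughly {proton_radius / delta_L:.0f}x smaller than a proton radius")
# ~4e-18 m -- a couple of hundred times smaller than a proton. That is the kind of
# "better tool" without which the 2015 detection simply could not have happened.
```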
hmmm, how about "no". It really boils down to how you define science. If you define science as "published in peer reviewed journals" or "knee-deep in academia", the statement is correct. If, however, it's knowledge based on observation of phenomena (literally what science means), preferably with understanding why things happen (having a theory), it's so wrong, it's not even funny - it's a very dangerous standpoint, because the alternative to the scientific method is magic thinking and cargo cults.
Engineering existed way before any kind of math or symbolic manipulation. It was a collection of heuristics.
Information theory came after people were already sending signals over the wire. Similarly, there were architectural feats before geometry or Newton’s laws were discovered.
Was it not based on knowledge gained from observation (science)? You don't need math or symbolic manipulation, let alone information theory, to do science, even if those tools help immensely when you do use them!
But we're not talking about just abstract principles. By themselves, these abstract principles are just that - abstract. They only matter when applied to the concrete, when you use them to learn something new about reality.
Technology is how we extend the set of things to which these principles can be applied, and thus the set of things we can learn about our world.
From a purely positivist viewpoint you are probably right; however, you seem to be taking the deductive approach (use principles to learn) as opposed to the inductive one (use learnings to extract principles). I wouldn't dismiss either of them.
edit: re-read your comments again, and I see your point now. You are absolutely right.
I would say that really it's both: unresolved questions (which can come from experiment or theory) often spur the development of new measurement technology. On the other hand, new measurement capabilities often spur new scientific discoveries.
The best tools are typically at the bleeding edge of technology, as the tool generally has to be of better quality than the thing being made/measured with it, so if you are going to make the best, you need a tool that is better than the best (which typically means it costs a lot). If you ever want a hard-core engineering job, go and work for a company that makes the things that make things, whether that be machine tools, electrical measurement equipment, production lines, or whatever.
For example, ASML makes the tools that make chips. Keysight and Tektronix make the tools that measure production. Those are just random names; the list goes on. The upper echelons of such companies are at the frontiers of engineering.
I don't think this is generally true. There are lots of examples where you measure something with higher resolution than the tool's own manufacturing precision (for instance, an STM), and similarly lots of examples where you use a lower-precision tool to build a higher-precision tool (for instance, a mill or lathe; optical focusing would also allow for a shrink if it weren't for the diffraction limit).
But moving a generation forward in precision is good; you only ever get close to perfect, with diminishing returns.
> I always say that important scientific discoveries arise from better tools.
This is something I hadn't thought about before, but it makes total sense.
When I read that, my first thought for current technological tooling that might lead to new discoveries is the massive transformer language models that OpenAI / EleutherAI / others are building.
Some basic extrapolation of how far and how fast technology like GPT-X has come suggests that future versions could (should?) be revolutionary for knowledge discovery.
You might be interested in older work by philosopher Larry Laudan. He posited mutual positive feedback (or co-evolution) between: a) better data; b) better models or theories; and c) better instruments. Basically agreeing with you.
Likewise, he also noted that constructing better instruments is theory-dependent, as better theory allows a more complete understanding of sources of error (not to mention entirely new technologies).
Laudan called his a "tri-partite" model, as I recall.
I bought an air filter; it has a PM2.5 meter on it.
Every day I'm discovering and understanding more about what drives PM2.5 in my house.
It's not really an engineering breakthrough; at a stretch you could say PM2.5 meters are getting cheaper. But really, to me, it's the measurement, just better data.
I could have bought a PM2.5 meter years ago, but culture held me back. I personally don't believe in working from home, but if you do, once again the tech has existed for a while; there's no real breakthrough happening.
Get the tools to the people.
Why is there no free, easy-to-use OCR (Tesseract = PITA)? I don't think a breakthrough or discovery is needed there.
Cooking, even without burning things, can really spike it. No behaviour change yet.
With the fireplace I now behave in a way that creates fewer spikes; it's just more effort to be less smoky. And lighting a fire well is a cool skill you can nerd out on. I do want to get a better fireplace.
Some days it just spikes and I don't know why. Perhaps pollen; I'll work it out one day, perhaps.
I bought an ultrasonic incense diffuser, because I'm a nerd and it's kinda techy and fun. Undecided if I like it.
And I always run the filter; its wattage is less than an LED light, and it costs a little (filters aren't exactly cheap), but way less than my 'longevity' drugs.
I fail to understand how this is related to the topic of scientific discoveries and the technology underlying them. Yes, your personal discoveries might be based on purchasing a PM2.5 meter, but personal discoveries and understanding are not generally related to scientific discoveries or understanding. If you study, e.g., quantum mechanics, it will not generally advance our understanding of quantum mechanics as a society. Maybe you will become a great quantum researcher and make discoveries, but then it will be your discoveries that advance understanding.
If you want to jump back to the main article, about the philosophy of science, then yes, this isn't relevant. I haven't read the book, but I'm not talking about temporal consciousness, phenomenologists, philosophy and such things.
> it will be your discoveries then that advance understanding
This in practical reality doesn't really happen. The philosophy fails here.
I'm just talking analytically about what happens when people get a device that can measure a tenth finer, or about how Excel has pushed science; it's not related to philosophy or theoretical ideas around the purer sciences.
More specifically, citizen science like this not-for-profit air quality map: https://waqi.info/ - tools to the people.