In the 1980s, people in chemistry labs who needed to find the area beneath a curve would print or plot the curve, cut out the area, and weigh it. This worked quite well, since they had accurate scales.
This reminds me of how the geographical center of the US was once found by balancing a cardboard cutout [1]; an accompanying document then explains the limitations and caveats of this method and of other methods that produce different results, while musing on the discrepancy between the ambiguity of the question and the public's expectation of a clear, reproducible answer.
There was one experiment in a physics lab course (undergraduate level) just ten years ago in which we also had to integrate like this. Not because we lacked better methods, but so that we had experienced that it can be done this way.
I think it would actually have been easier to use current equipment than to maintain the older gear, but it was really about doing the work manually. Think of it like having to learn mental arithmetic before being allowed to use a calculator. And so we got to write lists of data points, or plot them with an X-Y plotter, instead of collecting measurements automatically. We hated it at times (noting a single value every 5 minutes is way more annoying than it might sound! The interval is too long to keep one busy and too short to do something meaningful in between), but in retrospect it was the right way to learn to do experiments, in my opinion.
More generally, there was a dedicated instrument for measuring areas on a map or drawing, the planimeter, widely used before the advent of (affordable) computing:
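A planimeter mechanically integrates around a closed boundary; the discrete software analogue of tracing that boundary is the shoelace formula for a polygon's vertices. A minimal sketch (function name and example polygon are my own, not from any planimeter manual):

```python
def shoelace_area(vertices):
    """Area of a simple polygon from its vertices in traversal order
    (shoelace formula), the discrete counterpart of tracing a closed
    boundary with a planimeter."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]  # wrap around to close the boundary
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0

# Unit square traced counter-clockwise:
print(shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
```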
Reading it, it's quite hard to believe it wasn't a joke, but it doesn't seem to be. It thanks R. Kuc of Yale (a professor of electrical engineering) for his expert review.
Google Scholar says it has 336 citations, most of which appear to be non-ironic[0].
The original paper was immediately followed (i.e. in the same issue) by two corrective articles[1], one titled "Tai's Formula Is the Trapezoidal Rule". So why did they publish it at all? This page discusses that question:
> The most basic numeric integration method is so simple I don't think it needed to be invented.
Exactly. The individual who came up with that paper shows a complete lack of imagination, not merely ignorance: how someone (scientist or not) fails to see the likelihood of such a simple method having been discovered before is quite puzzling to me. This happens to be closely related to something I was doing recently: implementing a very simple little tool to integrate raw data. (I don't know calculus well at all.) Here is my thought process (and probably most people's):
1. I just want basic integration of raw data; something simple, no interpolation at this stage.
2. Hmm, just draw a polyline through the points. How do I find the area it outlines? (I figure that bit out quickly; it's just basic trig and some simplification, all very intuitive.)
3. OK, that's how I find the area. I wonder what this is called; it's too simple not to have been discovered hundreds of years ago... (goes and does some searching to find the matching definition)
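The area under that polyline is just a sum of trapezoids, one per segment; a minimal sketch of the idea (names are mine):

```python
def trapezoid_area(points):
    """Integrate raw (x, y) data by summing the trapezoids under the
    polyline that connects consecutive points."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0  # width times average height
    return area

# Straight line y = x sampled at three points; exact area on [0, 2] is 2.
print(trapezoid_area([(0, 0), (1, 1), (2, 2)]))  # 2.0
```

Which is, of course, exactly the trapezoidal rule the searching in step 3 turns up.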
... What kind of person gets to step 3, marvels at their re-invention of the trapezoidal rule, and goes straight to publishing a paper? How is that "doing science"?
And how is it that the paper got reviewed and accepted for publication without somebody pointing out the obvious?
As a 1980s high school student my first thought of how I'd find the area under a curve with a computer was basically to pick random points and see if they were above or below the curve. At the time I had about one year of exposure to Apple II basic. My math teacher said "yeah that's the Monte Carlo technique."
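That dart-throwing idea is easy to sketch in a few lines; a minimal Monte Carlo estimate (function name, bounds, and sample count are my own choices):

```python
import random

def monte_carlo_area(f, x_lo, x_hi, y_hi, n=100_000, seed=42):
    """Estimate the area under f on [x_lo, x_hi], assuming 0 <= f(x) <= y_hi,
    by throwing n random points into the bounding box and counting how many
    land below the curve."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if rng.uniform(0, y_hi) < f(rng.uniform(x_lo, x_hi))
    )
    box_area = (x_hi - x_lo) * y_hi
    return box_area * hits / n

# Area under y = x^2 on [0, 1]; the exact answer is 1/3.
print(monte_carlo_area(lambda x: x * x, 0.0, 1.0, 1.0))
```

The estimate wobbles around 1/3 with an error that shrinks like 1/sqrt(n), which is why the technique shines in high dimensions rather than for one-variable curves.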
When I was learning to use a chromatograph (barely a decade ago) the textbook still suggested the "plate stacking method" to figure out the AUC (area under the curve): basically see how many rectangles fit under the curve.
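Counting rectangles under the curve is just a Riemann-style sum; a rough sketch of the "plate stacking" idea as I understand it (names and the plate-size parameters are my own, not from any chromatography text):

```python
def count_plates(f, x_lo, x_hi, dx, dy):
    """Approximate the area under f by stacking dx-by-dy 'plates': for each
    column, count how many whole plates of height dy fit under the curve."""
    n_cols = int(round((x_hi - x_lo) / dx))
    area = 0.0
    for i in range(n_cols):
        mid = x_lo + (i + 0.5) * dx   # sample the curve at the column center
        plates = int(f(mid) // dy)    # whole plates that fit in this column
        area += plates * dx * dy
    return area

# Under y = x on [0, 1] (exact area 0.5), with coarse plates the count
# lands slightly below the exact value:
print(count_plates(lambda x: x, 0.0, 1.0, dx=0.1, dy=0.05))
```

Shrinking dx and dy makes the stack converge to the true AUC, which is why the graph-paper version worked well enough in practice.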
The user interface of scissors and scale is much faster than the human/computer user interface for numerical integration after fitting a smoothed curve to the measured data.
The human scissor operator, in theory, can exercise far better judgment at throwing out invalid data points than an algorithm can when fitting curves. (When I was measuring data point 15 I know I was distracted; no surprise it's an outlier; I can safely disregard it completely, etc.)
There's a remarkable feature of these old videos: they're so easy to follow. No matter the topic (analog gear computers, car mechanics, wave principles and radio), it's always fun yet quite precise. We've lost something here.
Incidentally, these were the computers in the early Heinlein books.
I was mulling the comment here a few days ago pointing out Heinlein's prescient depiction of networked computers, and thinking that the hard part would be having mechanical computers pick up the phone and send electromagnetic pulses down the line. Perhaps pecking at telegraph keys ...
Ash the android in Alien used a water-based computer. Or maybe that was milk. It looks like he might have had problems solving non-homogeneous differential equations, because the cream would rise to the top.
I know of two excellent examples of this. One is an entertaining modern attempt to recreate economic models with a fluid computer: https://vimeo.com/131690448
It was built in the 1950s to study the effects of various plans, including one proposal to divert all incoming rivers to "productive" use. It was eventually made obsolete by computer simulation, but it's still there. I bring a lot of my nerdy out-of-town visitors there; it's amazing to walk around on a 2-acre simulation.
> To better explain how it works, here is a description by Steven Strogatz of what I'm assuming is a comparable device. Built in 1949, nearly a decade and a half after Lukyonov's, it's called the Phillips machine, after its inventor, Bill Phillips.
In the front right corner, in a structure that resembles a large cupboard with a transparent front, stands a Rube Goldberg collection of tubes, tanks, valves, pumps and sluices. You could think of it as a hydraulic computer. Water flows through a series of clear pipes, mimicking the way that money flows through the economy. It lets you see (literally) what would happen if you lower tax rates or increase the money supply or whatever; just open a valve here or pull a lever there and the machine sloshes away, showing in real time how the water levels rise and fall in various tanks representing the growth in personal savings, tax revenue, and so on.
“It’s a network of dynamic feedback loops,” Strogatz further writes. “In this sense the Phillips machine foreshadowed one of the most central challenges in science today: the quest to decipher and control the complex, interconnected systems that pervade our lives.”
> The water level in various chambers (with precision to fractions of a millimeter) represented stored numbers
I wonder how they dealt with thermal expansion. Maybe instead of using water at room temperature they heated it to something higher and let a thermostat take care of it? But it would still be very hard to distribute the heat evenly.
Thermodynamics lab tech was pretty advanced by that time.
The part that reads bogus to me is the fractions of a mm. The water meniscus is sensitive to contamination and tube diameter, and it's "always" been easier to measure liquid masses to higher sig figs than liquid volumes. I suspect the journalist filter turned mg into mm, or that a precision scale produced repeatable results equivalent to fractions of a mm in the container. The only solution I can come up with that would work in that era would be something weirdly optical involving mirrors and multiple floats.
I vaguely remember, a quarter century ago in a chemistry lab, placing a beaker of distilled room-temperature water on a very nice precision scale as a demonstration, while the instructor had us calculate how long it would take the inch or so of water to evaporate based on the slowly decreasing mass; the result during a dry winter was somewhere around a month. The room must have been sealed at 100% humidity, because, in a very hand-wavy way, small fractions of a mm in height would correspond to around a minute of evaporation in normal winter lab air. I also wonder how the volume of water changes as CO2 is absorbed or emitted; even the smallest gas bubble could mess up fraction-of-a-mm measurements.
>The U.S. Army Corps of Engineers Bay Model is a working hydraulic scale model of the San Francisco Bay and Sacramento-San Joaquin River Delta System. While the Bay Model is still operational, it is no longer used for scientific research but is instead open to the public alongside educational exhibits about Bay hydrology. The model is located in the Bay Model Visitor Center at 2100 Bridgeway Blvd. in Sausalito, California.
I'm not sure how different this is from the ball integrator [0] or Vannevar Bush's differential analyser [1]. The water-computer article says partial differentiator, then integrator; it's unclear.
Fun fact: the device you're referring to was actually based on a different real-life water-based computer, the MONIAC[1]. It used basins of water, pipes, and adjustable flow regulators as a physical metaphor for the movement of money in macroeconomics. I'm personally a really big fan, and I've been meaning to work up a simulation for quite some time now.
Similarly, there was a story about a computer of sorts based on water surface tension, which encoded complex equations and let the physics approximate a solution in parallel. Can't find the name, though.
Found an online link to one of the articles Dewdney wrote for Scientific American about "analog computers" (this is the second article, there was one a year earlier):
If I recall correctly, the World3 model, which the authors of 'The Limits to Growth' used, also utilized fluids to simulate the increasing or diminishing influence of different factors on our ecosystem.
It's not connected to anything, so probably not. However back when we used purpose-built electromechanical switching systems for telephones, people exploited them[0].
If the phone system used water instead of electricity for in-band signaling, would phone phreaks have to inject blue fluid into the pipes to exploit them?
If water pipes were used for signalling, it would probably be as acoustic conduits (analogous to wires, which are electromagnetic wave conduits). You would exploit them in just about the same way as the electromechanical systems were.