Fire control computers, like those on navy ships, were faster than the digital computers of their day. YouTube has a number of videos on them.
An opamp performs multiplication faster than a digital computer (speed of light vs a few cycles). It's not super useful on its own, but it does fit the criteria.
In Veritasium's video 2/2 on analog computers [0] they show some startup products near the end.
What? No. Op-amps don't multiply, and they don't operate at the speed of light. They have a timescale that goes like their bandwidth, which depends on their feedback path.
Yes, feedback op-amps definitely have bandwidth limits, although you can get ones in the gigahertz range now.
Analog multiplier ICs are available.[1] They're not common, and they cost $10-$20 each. Error is about 2% worst case for that one. There are several clever tricks used to multiply. See "Gilbert Cell" and "quarter square multiplier".
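For intuition, the quarter-square trick turns a multiplication into two squarings plus a sum and a difference, and squaring is relatively easy to approximate with a nonlinear analog element. A quick sanity check of the identity in plain Python (the function name is mine):

    # Quarter-square identity: x*y = ((x + y)**2 - (x - y)**2) / 4.
    # An analog quarter-square multiplier does the squarings with
    # matched nonlinear elements and the sums/differences with op-amps.
    def quarter_square(x, y):
        return ((x + y) ** 2 - (x - y) ** 2) / 4

    assert quarter_square(3.0, 7.0) == 21.0
    assert quarter_square(-2.5, 4.0) == -10.0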
This is absolutely not true: the speed of analog circuits is (by a significant margin) determined by the parasitic capacitance, inductance, and resistance of the components.
To put numbers to it, a typical high-performance analog multiplier might have a loop length of 1 cm for the feedback path. By a light-speed argument this circuit should operate at 30 GHz, but realistically such circuits have a bandwidth measured in megahertz.
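For reference, the 30 GHz figure is just the speed of light over the loop length, i.e. one light-speed transit of the feedback path (ignoring the velocity factor of a real trace):

    c = 3.0e8        # speed of light, m/s
    loop = 0.01      # assumed 1 cm feedback path, m
    print(c / loop)  # 3e10 Hz, i.e. the ~30 GHz light-speed limit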
If your values are in the mechanical domain, doing a simple and fixed computation in the mechanical domain may be more efficient. An example would be a mechanical differential in a rear-wheel drive car [1], or a swashplate in a helicopter [2].
Those aren't examples of computation; they're examples of power transmission. If you found a way to compute the same information as a swashplate or a differential with a lower-cost, higher-speed, more reliable, lower-power device, it wouldn't replace the swashplate or differential. In fact we've had such devices for over a century, because the swashplate is just multiplying two quadrature sine waves by constants and summing them, and the differential is just adding (or subtracting).
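To make the quadrature claim concrete, here's roughly the function a swashplate evaluates, in a simplified model of my own (names and conventions are illustrative):

    import math

    def blade_pitch(azimuth, collective, cyclic_sin, cyclic_cos):
        # A tilted swashplate computes, for a blade at azimuth angle psi,
        # pitch = collective + A*sin(psi) + B*cos(psi), where the
        # constants A and B are set by the pilot's cyclic stick.
        return (collective
                + cyclic_sin * math.sin(azimuth)
                + cyclic_cos * math.cos(azimuth))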
These are bona fide examples of computation, with results immediately consumed. Computation without output is sort of pointless.
They are not very different from the computation inside the injection controller of an ICE, with its results consumed within microseconds as motions of injection valves. The key difference is the intermediate use of an electronic computer, an MCU, instead of a purely mechanical and pretty inflexible device, the camshaft.
Certainly we could replace a swashplate with some electric or hydraulic actuators driven by an MCU if we needed to compute something more complex than what a swashplate currently computes, much as we did with the camshaft. This is not very probable though, because the new system would also have to work unpowered to allow autorotation, to say nothing of reliability requirements far higher than those for a car.
My point is that, in that scenario, what replaces the swashplate is mostly the electric or hydraulic actuators, not the MCU. If it weren't, you'd make the swashplate mechanism much smaller, lighter, and cheaper, even if you had reliability requirements your MCU couldn't meet.
In the south-pointing chariot or the Antikythera mechanism, the differential performed a computational function, with its action of transmitting power quite peripheral to that; in your car's rear end, it performs a power-transmission function, with its action of computation quite peripheral to that.
The same situation holds with transistors. You can use a 2N7000 to toggle a light or control a relay or a motor, or you can use it for (digital or analog) computation.
If you're using it in an NMOS NOT gate or the input stage of an op-amp, you're using it for computation, and so you wish it were smaller; it would work better if it were smaller because then it wouldn't need so much energy to turn it on or off. (For analog computation, you only wish it were smaller up to a point, because at extremely small sizes that makes it more sensitive to noise, but you wish it were really a lot smaller than a 2N7000.) A 2N5457 is generally better for an amplifier input stage, and the no-longer-available discrete signal MOSFETs are probably better for NMOS NOT gates. The N-MOSFETs integrated into a chip are enormously better at computation than a 2N7000.
By the same token, though, a 2N5457 or signal MOSFET is much worse than a 2N7000 at power transmission. If you're using it to PWM a motor, you wish it were larger; it would work better if it were larger because then it would be at less risk of overheating, be more efficient at a given current level, and be able to control a bigger motor. An IRF630 is a better power MOSFET than a 2N7000; an IRF540N is better still. But they're enormously worse at computation than a 2N7000.
Helicopter swashplates and differentials are very much on the power-transmission end of the spectrum, not the computation end, even though they cannot avoid doing computation as part of their job.
> You've got me intrigued -- what are examples of mechanical devices being faster than digital ones? Assuming you're talking man-made.
You might be able to build a fluid device to test a property faster than you can simulate the fluid dynamics in full detail. Perhaps not on the first iteration, but iterating small changes to get a desired result could certainly be faster than simulating it, for simple systems.
> The Water Integrator was an early analog computer built in the Soviet Union in 1936 by Vladimir Sergeevich Lukyanov. It functioned by careful manipulation of water through a room full of interconnected pipes and pumps. The water level in various chambers (with precision to fractions of a millimeter) represented stored numbers, and the rate of flow between them represented mathematical operations. This machine was capable of solving inhomogeneous differential equations.
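The mapping to differential equations is the standard analog-computer one: a tank integrates, because its level is the time integral of net flow. A toy digital equivalent (forward Euler, with made-up rates):

    # Water level as an integrator: dV/dt = inflow - outflow.
    level, dt = 0.0, 0.01
    for _ in range(1000):
        inflow, outflow = 2.0, 1.0        # made-up constant flow rates
        level += (inflow - outflow) * dt  # the level integrates net flow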
It all comes down to the definition of "faster". Standard measures of speed assume binary computation: the idea that there is a finite, final answer. Take a fire control computer on a ship. It has maybe 30 inputs, all essentially analogue dials. It combines them into a continuous analogue answer, a firing solution for the guns (elevation + azimuth). It doesn't do that "X times per second" or to a particular level of accuracy. The answer is always just there, constantly changing and available to whoever needs it whenever they ask for it, measurable to whatever level of precision you want to measure. If you measure the output every microsecond, then it is a computer that can generate an answer every microsecond. But that speaks more to the method of measurement than to the speed of the machine.
It's true that we measure the speed and precision of analog "computers" differently from how we measure them for digital computers, but it does not therefore follow that analog "computers" are all infinitely fast and perfectly precise. Any analog system has a finite bandwidth; signals above some cutoff frequency are strongly attenuated and before long are indistinguishable from noise. And analog systems also introduce error, which digital computation often does not. When digital computation does introduce error, you can decrease the size of the error exponentially just by computing with more digits, and there is no equivalent approach in the analog world.
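To put a number on the "exponentially" point: each added bit halves the worst-case quantization error, a scaling analog hardware has no counterpart for.

    # Worst-case error of representing a value in [0, 1) with n bits
    # is 2**-n: every extra digit shrinks the error exponentially.
    for n in (8, 16, 24, 32):
        print(n, 2.0 ** -n)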
For mechanical naval fire control computers the cutoff frequency is on the order of 100 Hz and the error is on the order of 1%. You won't learn anything interesting by sampling them every microsecond that you wouldn't learn by sampling them every millisecond.
Basically anything that has to do with processing an analog signal. It's always faster to do that with analog electronics than to go through an ADC, do the computation in the digital domain, and then get the result back to the analog world with a DAC.
One example: if I need something that turns on a light bulb when two switches are triggered (basically an AND gate), it's obviously faster to do that with an analog (mechanical) device, namely the two switches wired in series, than to acquire the signal with a microcontroller and output a signal to turn on the light bulb.
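For contrast, the digital detour would look something like this hypothetical MicroPython sketch (the pin numbers are made up); every pass through the loop adds sampling latency that the series-wired switches simply don't have:

    from machine import Pin  # MicroPython; pins below are hypothetical

    sw1 = Pin(12, Pin.IN)    # assuming active-high switches with
    sw2 = Pin(13, Pin.IN)    # external pull-downs
    lamp = Pin(2, Pin.OUT)

    while True:
        # Poll both inputs and AND them in software; two switches wired
        # in series compute the same thing with zero added latency.
        lamp.value(sw1.value() and sw2.value())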
Thinking about the industrial world, there are cases with speed and real-time constraints where it makes sense to do signal processing with analog components rather than digital ones. And that was always the case before computers were invented, by the way (missile guidance systems were purely analog, as one example; you can do a lot of stuff!).
> Spanish Catalan architect Antoni Gaudí disliked drawings and preferred to explore some of his designs — such as the unfinished Church of Colònia Güell and the Sagrada Família — using scale models made of chains or weighted strings. It was long known that an optimal arch follows an inverted catenary curve, i.e., an upside-down hanging chain. Gaudí's upside-down physical models took him years to build but gave him more flexibility to explore organic designs, since every adjustment would immediately trigger the "physical recomputation" of optimal arches. He would turn the model upright by way of a mirror placed underneath or by taking photographs.
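The curve the chains find is the catenary, y = a*cosh(x/a); flipped upside down, it's the shape of an arch in pure compression. A minimal sketch of that "physical recomputation" done digitally (the parameter a is chosen arbitrarily):

    import math

    def catenary(x, a=1.0):
        # Shape of a uniform hanging chain; invert the sign to get the
        # compression-only arch Gaudí was after.
        return a * math.cosh(x / a)

    # Sample the chain across a span of 2 units:
    points = [(x / 10, catenary(x / 10)) for x in range(-10, 11)]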
I'm trying to imagine and am totally stumped.