A very popular analog computer patchable with 4mm banana jacks was the Comdyna GP-6. I went down a rabbit hole of reverse engineering it from photos and schematics years ago in order to build a version that would fit in a 3U panel for Eurorack modular synth interfacing. Never finished it, but the total BOM was... well, not much. The panel was going to cost more than the board+opamps+passives. I wanted to buy a real Comdyna but I remember the cheapest I found one for sale was ~$700 and I thought that was expensive. Not so much when you compare it to this unit which is housed in just open PCBs and a much smaller format.
If you are a student and want to learn a bit more about how operational amplifiers, resistors, and capacitors alone can solve differential equations... you can get an educational license for Matlab for nothing and Simulink can emulate all the analog circuitry you want and more.
Another approach might be using the Eurorack simulation software VCV Rack - if there isn't a set of modules you can install that do basic gain, integrators, multipliers, comparators, etc., it would be very easy for someone to write one.
> If there isn't a set of modules you can install that do basic gain, integrators, multipliers, comparators, etc., it would be very easy for someone to write one.
VCV Rack cannot do Zero Delay Feedback because each cable adds one sample of delay. You can do it inside your module, of course.
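A quick toy sketch of why that one sample matters (my own illustration, plain Python): patch a sine oscillator as two integrators in feedback, x' = y and y' = -x. With a unit delay in the loop (forward Euler, like a VCV cable) the amplitude grows every step; solving the feedback implicitly inside one module (trapezoidal integration, the usual zero-delay-feedback trick) keeps it stable:

    # Two-integrator sine loop, step size h
    h = 0.1

    # Delayed feedback (one-sample delay, i.e. forward Euler): blows up slowly.
    x, y = 1.0, 0.0
    for _ in range(1000):
        x, y = x + h * y, y - h * x  # amplitude grows by sqrt(1 + h^2) per step
    print("delayed feedback amplitude:", (x * x + y * y) ** 0.5)  # ~145

    # Zero-delay feedback (trapezoidal rule, solved in closed form): stays put.
    a = h / 2
    x, y = 1.0, 0.0
    for _ in range(1000):
        x, y = (((1 - a * a) * x + 2 * a * y) / (1 + a * a),
                ((1 - a * a) * y - 2 * a * x) / (1 + a * a))
    print("zero-delay amplitude:", (x * x + y * y) ** 0.5)  # ~1.0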
Simulink is of course the modern replacement for an analog computer and does a lot more. For open source there is Scilab's Xcos and, underneath it, OpenModelica.
It would be crude and inefficient, but you could also use SPICE and rig up op amps just like a physical analog computer, if you just want to learn or if SPICE is your comfort zone. Or if you want to add additional analog circuitry or SPICE models. It's the 21st century, so it will be fast enough computationally for small- to medium-sized models.
I'm not familiar with Matlab or Octave :) but I do know the discrete and continuous underlying mathematics involved here, and given the https://octave.org/ homepage's statement that "the Octave syntax is largely compatible with Matlab", the answer is yes.
Just to ELI5 for folks: the cool thing about analog computers (or digital approximations or simulations) is that since they are actually integrating (storing charge in a capacitor as the voltage varies) or actually differentiating (current through a capacitor varies with the voltage's rate of change), they can calculate what would be very complex to do on paper.
Or more plainly: if the rate at which water flows out of the bathtub drain depends on the water pressure above, which depends on the depth of the water in the tub, you can study all the math involved (what you did in calculus), or you can just put water in the tub and start your stopwatch (analog computer). Electrons flowing through or stored in a capacitor behave just like water in a tub.
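If you'd rather run the stopwatch experiment numerically, here's a minimal SciPy sketch of that bathtub (assuming a Torricelli-style model, dh/dt = -k*sqrt(h), with a made-up drain constant):

    import numpy as np
    from scipy.integrate import solve_ivp

    k = 0.05  # made-up drain constant

    def drain(t, h):
        # outflow rises with depth (via pressure), so the tub drains
        # quickly at first and slows as it empties
        return -k * np.sqrt(np.maximum(h, 0.0))

    sol = solve_ivp(drain, (0.0, 60.0), [1.0], dense_output=True)
    for t in (0, 15, 30, 45, 60):
        print(f"t={t:2d}s  depth={sol.sol(t)[0]:.3f} m")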
This is basically all that's involved inside a music synthesizer. The complexity comes from stacking together a variety of different concepts once you understand that the concepts are stackable (like when they taught you about arithmetic being commutative, associative, distributive, reflexive, etc.).
Easier with Julia, though, because the ODE and PDE support is better and you have better / simpler notation. Performance will be better as well, but for simple problems you can't tell.
Many circuits are represented by differential-algebraic equations (DAEs) and require modeling tools that can handle the high differential index of these systems. This is the reason why they are typically handled via acausal modeling systems which can do index reduction. For Julia, this is the ModelingToolkit portion of the SciML ecosystem (https://docs.sciml.ai/ModelingToolkit/stable/), plus modeling tools like https://github.com/ModiaSim/Modia.jl and OpenModelica front-ends such as https://github.com/OpenModelica/OMJulia.jl.
I'm wondering if Modelica might be useful for this. It's an open source modeling language, though I've never used it. It's basically writing models with differential equations for each of the components and then connecting them together to form a system.
The creators of Modelica, Hilding Elmqvist and Martin Otter, work in Julia these days so there's been a boom to the development of acausal modeling tooling in Julia. They built https://github.com/ModiaSim/Modia.jl and then we collaborated with them on some ideas to build ModelingToolkit.jl (https://docs.sciml.ai/ModelingToolkit/dev/). The latest versions and experiments in OpenModelica are now targeting ModelingToolkit as a compilation backend, which gives a pathway for legacy Modelica code to be used with the MTK ecosystem. The OpenModelica community has always been a very small group in comparison to the Dymola compiler team and so the open part of Modelica never really caught on, but we hope to help them piggyback off of our compiler to get it there. We'll have a workshop on this collaboration in Linköping next February if you're interested in the details.
I have used https://openmodelica.org/. It's another one of those "invest substantial time understanding what they are doing, and you will likely be rewarded." Simple things are simple, but I found it took longer than I would have liked to bang out e.g. a PID algo with a particular motor and load. If you want to try this, buy the book, and follow it to the end. Then keep learning, as you build more and more sophisticated simulators.
I'd argue that acausal modeling is much easier for circuit design. And there's quite a bit of research, including surveys of experts, that backs it up as well. I recently put out a video exactly on this topic that summarizes some of the research and shows some examples to this end (https://www.youtube.com/watch?v=ZYkojUozeC4). Acausal modeling definitely isn't something that is taught in most (American) engineering schools, but once you get the brain flip it pretty clearly can improve productivity, no matter which acausal modeling language is used.
Responding at the bottom of the thread here, and also just to say Hi to Chris above.
So in my top-level comment I mentioned Simulink as a nice path for students to learn modeling circuits, and in that way they could use this path to simulate what an analog computer, such as the one in the original link, is running.
My early forays into numerical simulation and linear systems modeling were through Simulink, and I've built models of some very big systems in Simulink and used them in the design process of various things, typically vehicles and automotive systems: modeling powertrains, electro-hydraulic controls, EV drive-cycle range simulation, etc. That was all many years ago.
I then went down the path of learning a bit of OpenModelica, as I was intrigued by the acausal approach and the open source aspect. Like anything, the learning curve was steep (steeper than Simulink's), and I never got past a lot of clunkiness.
Then I discovered Julia a few years ago and even though I've moved on to other work, I've followed closely how people using Julia are modeling and solving, and what Chris says about how building acausal systems can be easier, and the benefits seen in ModelingToolkit.jl combined with the full Julia ecosystem, all rings so true. I'm really excited to eventually see a full-on acausal, block-based, yet easily extendable Simulink competitor sprout from Julia devs, because when it does it will be the killer app for the language for a lot of engineers, IMO.
Funnily enough, Mathworks now implements an acausal modeling approach in the Simulink GUI domain with their Simscape add-on, but I have not gotten a chance to use it.
Anyway, yeah I need to get a project going to utilize some of the latest happenings in Julia modeling, and maybe find a way to contribute. Cheers to you and the other devs, Chris!
Matlab has Simulink, but I don't think there is anything equivalent for Octave. However, Scilab has Xcos. In general, if you don't need total compatibility with Matlab, you should take a look at Scilab, because it is much more polished than Octave.
Very familiar-looking to anyone who has been following the modular synthesizer renaissance. Modular synths are basically analog computers, just not well-optimized for precision in most cases. All the basic building blocks of THAT are available as modules from a number of boutique manufacturers.
Yes but the emphasis is different. Modular synths typically come with oscillators; analog computers typically don't. But if you want an oscillator you can build one out of integrators in a feedback configuration.
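For instance, a minimal sketch of that patch (my own example, in SciPy rather than op-amps): two integrators in a loop, x' = y and y' = -omega^2 * x, and out comes a sine:

    import numpy as np
    from scipy.integrate import solve_ivp

    omega = 2 * np.pi * 2.0  # 2 Hz, an LFO-ish rate

    def loop(t, state):
        x, y = state
        return [y, -omega**2 * x]  # each integrator feeds the other, one inverted

    sol = solve_ivp(loop, (0.0, 1.0), [1.0, 0.0], max_step=1e-3)
    # x(t) tracks cos(omega*t) to solver accuracy
    print("max deviation from a cosine:",
          np.max(np.abs(sol.y[0] - np.cos(omega * sol.t))))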
Modular synths typically come with filters; analog computers typically don't. Analog computers do come with integrators which could be considered very weird lowpass filters. Likewise a differentiator can be considered a weird and always imperfect highpass filter.
Both typically come with adders, but a modular synth will call this a mixer. Both typically come with multipliers but a modular synth will often call a multiplier a ring modulator (and it might not multiply all four quadrants).
(Just to make things more confusing, at RF frequencies the word "mixer" typically refers to a multiplier rather than an adder. In the audio range, mixers are adders.)
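That RF usage comes straight from the product-to-sum identity, sin(a)sin(b) = [cos(a-b) - cos(a+b)]/2: multiplying two signals produces their difference and sum frequencies. A quick numeric check (my own illustration):

    import math

    a, b = 1.3, 0.4
    lhs = math.sin(a) * math.sin(b)
    rhs = (math.cos(a - b) - math.cos(a + b)) / 2
    print(lhs, rhs)  # identical up to floating point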
Modular synths are designed for making sound; analog computers are designed for precise, much slower waveform generation. An analog computer's circuits might not even be capable of operating fast enough to make sound.
As long as everything is voltage controlled, your voltage ranges match, and your analog computer is set to generate voltages rather than currents, you should be able to use an analog computer to generate very wild LFO-type control signals. It might not work fast enough to replace your regular (audio) oscillators, however.
It was so cool to realize that NIN used one for making a song. Caustic had an emulated one with a lot of features, but I don't know if there are other DAWs with similar ones...
The value of analogue computation is limited; they are not universal computers. The problem with analogue computers is that there's a limit on the number of consecutive operations they can perform to produce a useful result. With digital computers, errors can be corrected after each operation, whereas with an analogue computer every step introduces noise which cannot be corrected. It cannot be corrected because in an analogue system any input value is 'valid'; in a digital system, input values that are slightly higher or slightly lower than a digit can be corrected to the nearest digit.
Also, making analog computer components requires high precision. With digital components, the individual elements are driven to saturation, so much less precision is needed. This means digital components can be made very small. Indeed, this is why integrated circuits work so well.
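A toy illustration of the noise-accumulation point above (my own, plain Python): store a value, perturb it slightly each step, and compare a system that just carries the value forward against one that snaps back to the nearest discrete level after every step:

    import random

    random.seed(0)
    analog = digital = 1.0  # a stored "1"
    for _ in range(10_000):
        n = random.gauss(0.0, 0.05)
        analog += n                   # analog: the noise just accumulates
        digital = round(digital + n)  # digital: snapped to the nearest level
    print(f"analog drifted to {analog:.2f}; digital is still {digital}")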
While it's true that analog computers have limitations in terms of precision and error correction compared to digital computers, they can still offer some advantages in certain applications. For example, analog computers can be faster and more energy-efficient for specific tasks, such as solving differential equations or simulating physical systems.
That's cute. Almost exactly the same capabilities as the tube-era Heathkit EC-1 educational analog computer [1], but much smaller.
Analog computers are no fun without an oscilloscope. Once you can see graphs, you get intuition about how the inputs affect the outputs. If you only have a meter, you have to write down data and plot.
This is my EC-1 simulating a projectile fired from a cannon, driving a plotter; a Comdyna GP-6 showing 'chaos'; and Joe explaining "Bouncing Ball" on another EC-1, complete with original 3" Heathkit 'scope as output (close-ups of it in operation at the end). 66-second video clip from the West Coast Vintage Computer Festival, 2021: https://drive.google.com/file/d/1gbF9RZ_UZrAIBq10mJ_SmZru1Gt...
They should have built in one of these $20 oscilloscopes[1] in place of the LCD meter. And a speaker. Their Analog Thing costs over US$500, after all. Then you'd have a self-contained unit good for student use.
I’ve been toying with the idea of building a "learn analog electronics" course by having the student build a musical synthesizer one stage at a time, starting with dual tone generators that can be made to deliver a range of frequencies and a selectable wave pattern, then moving through envelope filters, a modulator (part of which already exists as the wave selection from earlier), and all the controls to make it happen. Not sure if this has been done or if it's a dumb idea.
It sounds like a great idea to me. I’ve been trying to carve out time to do more electronic tinkering and learning and that sounds like a neat set of projects.
It can't be kids--this is too advanced. It can't be engineers (or student engineers)--there are better, more practical learning tools. Maybe people with an extra $500 who just enjoy fiddling around?
It definitely has classroom energy, but for that to work out it would need to come with a whole curriculum, textbook, etc. And even then, I think a lot of parents would be asking why this oddball thing vs playing with regular 555 and opamp circuits on a breadboard.
Okay, fair enough. But I know whenever schools try to adopt, say Linux or OpenOffice instead of the "industry standard" choices, there always seems to be wringing of hands about whether the children are being properly prepared for their futures.
Definitely useful once calculus is looked at. I attended a "mathematics summer school" back in 1979 at the local atomic research facility. (I was at the end of year 10.) It had an overall theme that looked at global oil production and consumption, modelled using exponential functions, integrals and the like. While it was focussed on FORTRAN and BASIC, they also had a massive analog computer (hooked up to an oscilloscope) which seemed to give more real-time insight as you tweaked those coefficients.
I just completed "instrumentation lab", a college physics class, where we built amplifier circuits (presumably for use with sensors in experiments, although we just used a signal generator). Seems like this would fit right in since the other portion of the class was oscilloscope training.
It's an educational board. I made a custom educational board for my kid a few years back (with a 16 bit digital CPU though) just to have something physical, with the possibility to be underclocked to extreme values and a bunch of LEDs in key points to illustrate the principle. I could have used an emulator, but something like that is 1000x better as it's bare metal and doesn't have any black magic under the hood.
The difference between the words emulate and simulate is difficult for me to grasp. One comes from the Latin `aemulus` and the other from `similis`; one talks about imitation and the other about similarity. When people discuss the differences between these terms, they say things like: one aims to be able to replace a thing, while the other aims to replicate the thing's internal state. Or, that one aims to replicate the external behaviour, and the other aims to replicate the internal state.
I somewhat discard these interpretations. My conclusion is that emulation is about making something equal to something else under some circumstance, while simulation is about approaching emulation (under some circumstance) but not aiming for or achieving complete emulation (under that circumstance). Basically, the difference between becoming equal and becoming similar. This is counter to popular usage, I think, but popular usage is a bit of a mess, in my opinion.
The problem with simulation is that it might produce artifacts that a user might exploit ("hey, this is cool!") and then finds out that it only exists in the simulation, not in the real world.
The concept of perfectly accurate emulation lies at the core of formal definitions of computing such as Turing’s seminal “Turing Machine” introduced in “On Computable Numbers” way back in 1936.
I went down this rabbit hole recently and came to the conclusion that there is no real difference between those. When does something cease being an emulation and turn into a simulation?
Analog computers don't have infinite precision due to the presence of noise, so digital computers can emulate that with high-enough precision arithmetic.
Yes but there are equations (like stiff differential equations) that are extremely difficult to solve accurately with a digital computer but which are trivial on an analog computer.
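For anyone unfamiliar with "stiff", here's a minimal SciPy sketch (a classic toy equation, my own choice): y' = -1000(y - cos t). An explicit solver has to take thousands of tiny steps to stay stable, while an implicit one cruises through:

    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, y):
        return -1000.0 * (y - np.cos(t))  # fast decay toward a slow signal

    for method in ("RK45", "BDF"):
        sol = solve_ivp(f, (0.0, 10.0), [0.0], method=method)
        print(method, "steps:", sol.t.size)

An analog computer doesn't care: the physics just settles at its own time constants.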
Simulation is about mimicking another device or system. Emulation is about setting up a system that is logically indistinguishable from another irrespective of its implementation substrate and details thereof.
A thing is successfully 'emulated' when it is logically impossible to distinguish between the system and its emulated counterpart.
Since, as you said in a sister reply, even one analog computer might be slightly different from another analog computer and thus unable to emulate it: if you had the outputs of two different computers, one analog and one digital simulating it to a high precision (higher than the noise of the analog one), how could you distinguish which was digital and which was analog?
If you can't, then this is a meaningless semantic discussion. The digital computer can emulate the analog one as well as any other analog computer can.
The point is that discrete computers can exactly and trivially emulate each other. The inability to emulate an analog computer by a digital or analog computer kind of is the whole point.
What was that application from a long time ago that had analog wiring sound systems that you had to manually (on screen) connect a wire between ports... and you could flip the rack from front to back?
--
One of my best friends growing up built a ton of analog mixers IRL while working at Malekko Heavy Industry... (I helped him a tiny bit create the CAD files for the CNC for the faceplates.)
Sure you can; you only need to simulate it down to some orders of magnitude around the Planck scale. And then you can go even further. Analog does not have infinite precision either.
But due to limitations of physics, nothing will be continuous.
Is the amount of water in a bucket continuous? No, you can count each individual atom. So you can simulate that by using large enough integers. The same principle applies everywhere.
These are assumptions. If you think the assumption of continuity is ridiculous, note that the definition of universal Turing machines requires an infinitely long tape (infinite memory), which of course conflicts with the finite memory of any actually implementable digital computer.
I am not saying you need a Turing machine; a finite one will do, since we are also dealing with a finite analog system. If the analog system is finite and has finite states that we can measure, then a finite computer will do just fine.
> A digital computer cannot emulate an analog computer: it can only simulate it to an arbitrary level of precision. That’s the whole point.
A modern digital computer can simulate this particular analog computer beyond the noise floor. Practically speaking, that means a digital computer can perfectly emulate this system, so it's simply a toy, or perhaps for aesthetics.
Question: One problem of current superlarge language models is their high inference cost. Wouldn't that be a perfect application for analog computers, since they are known to be power efficient? As I understand, the weights of a neural network are approximate values anyway, so it probably wouldn't be a problem when the calculations (vector multiplication or something like that?) aren't perfectly precise.
I assume this idea has long been considered and doesn't actually work for some reason. Perhaps because implementing an order of hundred billion parameters in an analog computer isn't technically feasible? That's about how many transistors are in a CPU, and those are presumably much simpler to implement than the weights ("synapses") of a neural network.
(I know there are special inference accelerator chips, like TPUs or co-processors on smartphone chipsets, but those seem to be fully digital and not actually "neuromorphic".)
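As a toy illustration of the "approximate weights are fine" intuition (my own numbers, nothing to do with any real chip):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 256))  # toy weight matrix
    x = rng.standard_normal(256)         # toy input vector

    exact = W @ x
    noisy = (W + 0.01 * rng.standard_normal(W.shape)) @ x  # ~1% weight noise

    rel_err = np.linalg.norm(noisy - exact) / np.linalg.norm(exact)
    print(f"relative output error: {rel_err:.3%}")

Whether that robustness survives being compounded across dozens of layers is the harder question.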
This is essentially how the very first neural network, the Perceptron, was implemented in the late 50s. It even had potentiometer dials to set the weights, and they were driven by motors during training!
As for feasibility of doing analog inference today, it's an interesting idea and I'm not sure. Billions of parameters would mean billions of wires though, there may be substantial parasitic losses in building such a thing.
Is 499 EUR for that BOM fair? Genuinely asking... high-end SoM modules with carrier boards and a metric ton of software can be obtained for half that price, and op-amps aren't that costly.
If you are looking for documentation, try clicking the prominent "read the docs" link. Once you do so, you'll notice "THAT schematics" is the very first link after the overview.
Saw this a while ago. I don't know enough about math to know if this works as described...I hope it does! A tiny dedicated chip for AI inference jobs is probably the next game changer the industry needs to put AI in everyone's pocket. Too bad OpenAI is going to kill any chance at something like this with regulatory capture.
Actually, I have a Pixel 7 with the Tensor G2. I can't find the tera-operations-per-second figure for it, but Google has said that it's [1] 60% more powerful than their Tensor Coral (4 TOPS at 2W of power). That puts the G2 at about 6.4 TOPS.
The Mythic analog chip [2] is pumping 25 TOPS at 3W. For comparison, modern (digital) GPUs are doing 25-100 TOPS at 50W-100W... while costing >$1000. So, it's a big jump.
You can model passive circuits with RLC ordinary differential equations (depending on the setup of the equations, there are algorithms such as Runge-Kutta to solve them numerically). Afaik you can model some active components with ordinary differential equations too, but I wouldn't be surprised if you had to resort to partial differential equations (more or less a "complete" electromagnetic simulation at that point).
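A minimal SciPy sketch of exactly that (a series RLC circuit driven by a sine source; component values are made up): the state is [capacitor charge q, inductor current i], and solve_ivp's default RK45 is a Runge-Kutta method:

    import numpy as np
    from scipy.integrate import solve_ivp

    R, L, C = 1.0, 1e-3, 1e-6  # ohms, henries, farads (made-up values)

    def rlc(t, state):
        q, i = state
        v_src = np.sin(2 * np.pi * 1e3 * t)      # 1 kHz drive
        return [i, (v_src - R * i - q / C) / L]  # KVL around the loop

    sol = solve_ivp(rlc, (0.0, 0.01), [0.0, 0.0], max_step=1e-6)
    print("current at t=10ms:", sol.y[1, -1])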
A simple analog computer would be a slide rule (or disk). The advantage is that it doesn't need batteries, but usually has very low precision (about 1.5-2ish decimal digits).
Hell, you can even print it and have a calculator with mul, div, sin, log, √...
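The whole trick of a slide rule fits in one line: multiplication becomes addition of logarithms, which sliding two log-scaled sticks performs mechanically.

    import math

    a, b = 3.1, 4.2
    product = math.exp(math.log(a) + math.log(b))  # add lengths on log scales
    print(product, a * b)  # both ~13.02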
To me the appeal is that they more closely model "the Real World". Perhaps like modular synths, they invite experimentation, learning (maybe a bit like a graphing program?).