Not just similar, the same. If you look through the documentation, you'll see that https://github.com/SciML/ReservoirComputing.jl is a collection of reservoir architectures with high-performance implementations, and some of our recent research has been pulling reservoir computing into the continuous domain for stiff ODEs (think of it almost like a neural ODE that you do not need to train via gradient descent): https://arxiv.org/abs/2010.04004. We are definitely digging through this paper with some fascination and will incorporate a lot of its advancements (reservoir-free architectures) into the software.
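For anyone who hasn't seen the basic construction: here's a minimal echo state network sketch in plain Julia (not the package's actual API; reservoir size, spectral radius, and the toy sine task are all illustrative assumptions). The recurrent weights are random and stay fixed; only the linear readout is fit, by a single ridge-regression solve rather than gradient descent.

    using LinearAlgebra, Random

    Random.seed!(42)
    n_res, T = 200, 1000
    u = sin.(range(0, 20π, length = T + 1))      # toy input signal

    W_in = 0.5 .* (rand(n_res) .- 0.5)           # fixed random input weights
    W = randn(n_res, n_res)
    W .*= 0.9 / maximum(abs, eigvals(W))         # rescale to spectral radius 0.9

    X = zeros(n_res, T)                          # reservoir states over time
    let x = zeros(n_res)
        for t in 1:T
            x = tanh.(W * x .+ W_in .* u[t])     # state update; W, W_in are never trained
            X[:, t] = x
        end
    end

    Y = u[2:end]'                                # targets: next value of the signal
    W_out = Y * X' / (X * X' + 1e-6 * I)         # ridge-regression readout, closed form
    println("train RMSE: ", sqrt(sum(abs2, W_out * X .- Y) / T))

The point is that everything recurrent stays fixed and training collapses to one linear solve, which is exactly what makes the continuous-time variant in the linked paper attractive for stiff ODEs.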
For implementations of delay embeddings, you can already find them in DynamicalSystems.jl (https://juliadynamics.github.io/DynamicalSystems.jl/dev/embe...) in the Julia ecosystem, set up to compose with DifferentialEquations.jl. I think there are still a lot of big questions about using this in the continuous setting, though, so there are many next steps.
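To make the idea concrete, a bare-bones Takens-style delay embedding is just lagged copies of a scalar series stacked into state vectors; DynamicalSystems.jl's `embed(s, d, τ)` covers this (plus much more, like automated choice of d and τ), so treat this as illustration rather than a substitute. The signal and lag values below are arbitrary:

    # Reconstruct d-dimensional states from a scalar series s via lags of τ.
    delay_embed(s::AbstractVector, d::Int, τ::Int) =
        [s[t + k*τ] for t in 1:(length(s) - (d - 1)*τ), k in 0:d-1]

    s = sin.(0.1 .* (1:500))       # scalar observable of some underlying flow
    E = delay_embed(s, 3, 17)      # rows are reconstructed 3-D states
    @show size(E)                  # (466, 3)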
Karl Friston’s dynamic causal modeling approach in neuroimaging has been doing this Volterra-kernel NVAR thing for years now. It's interesting that it’s being used more broadly, but surprising to see it called reservoir computing.
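For anyone who hasn't seen the connection spelled out: NVAR builds its feature vector from delayed observations plus their pairwise products, i.e. a truncated second-order Volterra expansion, and then fits only a linear readout. A rough sketch (the delay count k, lags, and toy signal are my assumptions, not anything from DCM or the papers):

    using LinearAlgebra

    # Feature vector at time t: [1; k delayed values; their pairwise products],
    # a second-order Volterra-style expansion with a linear readout on top.
    function nvar_features(s::AbstractVector, k::Int)
        cols = [begin
                    lin  = s[t-k+1:t]
                    quad = [lin[i] * lin[j] for i in 1:k for j in i:k]
                    vcat(1.0, lin, quad)
                end for t in k:length(s)]
        return reduce(hcat, cols)                 # features × time
    end

    s = sin.(0.2 .* (1:600)) .+ 0.3 .* sin.(0.05 .* (1:600))
    k = 4
    Φ = nvar_features(s[1:end-1], k)              # predictors up to time t
    y = s[k+1:end]'                               # next-step targets
    W = y * Φ' / (Φ * Φ' + 1e-8 * I)              # ridge fit, no gradient descent
    println("fit RMSE: ", sqrt(sum(abs2, W * Φ .- y) / length(y)))

Which is presumably why the same construction reads as Volterra kernels to a DCM person and as a reservoir with the nonlinearity moved into the readout to an RC person.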
The next generation Magic 8-Ball should use reservoir computing + a GPT-3-like model of all human knowledge to perform speech recognition and display the most likely answer to spoken yes/no questions! ... "Reply hazy, try again."
http://dataphys.org/list/pattern-recognition-in-a-bucket/
There was a Science Advances paper on RC recently too:
https://www.science.org/doi/10.1126/sciadv.abh0693