This is pretty cool. I don't know the nature of the calculation performed, but if it's an integral that took, say, 1000 clock cycles on a single-core 1 GHz digital computer, and if it's true that the metamaterial can do the same calculation in about 1 picosecond, then this is around a 1,000,000x speedup. Further, if you can create a set of, say, 100 integrators and differentiators into which you can decompose all your complicated math operations, then you have yourself a very, very fast general-purpose higher-order math module.
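For anyone sanity-checking the arithmetic, here's the back-of-envelope version (the cycle count and the 1 ps figure are just the assumptions above, not numbers from the paper):

```python
# Back-of-envelope check of the claimed speedup, using the assumed numbers
# above (1000 cycles on a 1 GHz core vs. ~1 ps for the metamaterial).
cycles = 1_000
f_clock = 1e9                  # 1 GHz
t_digital = cycles / f_clock   # 1e-6 s, i.e. 1 microsecond
t_analog = 1e-12               # the assumed ~1 ps analog latency
print(t_digital / t_analog)    # ~1e6, the ~1,000,000x figure
```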
Those 1000 clock cycles get you a perfect digital answer that is the same every time. This analog version gets you a faster answer with less accuracy and repeatability. This may be useful in some situations, but I would not call it general purpose.
I'm just speculating, but I don't see why you can't quantize the output electrically. I guess I'm suggesting a mixed-mode system, where an electronic CPU sends an array of values to an optical system that performs a calculation and returns a result electronically.
Sure, you can quantize the output, but that is not going to help your accuracy. Today's GHz CPUs are 64-bit; today's GHz ADCs are 8 to 16 bits. This gets you ~3 to ~5 decimal sig figs for analog and ~15.9 decimal sig figs for 64-bit floating point. And that's assuming your analog side is perfect, which is unlikely, to say the least.
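The digits figures come straight from bits × log10(2); a quick check:

```python
# Decimal significant figures from bit width: digits ~= bits * log10(2).
import math

def decimal_digits(bits):
    return bits * math.log10(2)

print(decimal_digits(8), decimal_digits(16))  # ~2.4 and ~4.8 digits (8- and 16-bit ADCs)
print(decimal_digits(53))                     # ~15.95 digits (53-bit double mantissa)
```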
Really reminds me of a special type of analog computer that wasn't mentioned, but was definitely used in old-school engineering: using conductive paper to model all sorts of fields.
How many sig figs can you get out of this microwave implementation?
In my experience trying to get RF measurements repeatable starts to become tricky at around 1% error. The dielectric constant of plastic parts changes constantly, capacitors age, etc. How sensitive is this device to temperature, humidity, barometric pressure, etc?
It's unclear. Depending on the fabrication/robustness constraints they added to the inverse-design procedure (which they don't really discuss in the paper), it could be either extremely robust or extremely sensitive. Generally speaking, if the measurement conditions are controlled, it's pretty easy to keep noise small relative to most lab perturbations, especially when the devices (and their operating wavelength) are this large. Overall, this is not really addressed in the paper (unless I missed it), so I'm afraid I can't give you a direct answer on what they do, just that it is possible, via this method, to design a device that is relatively robust to these changes.
Sure! I think it's quite doable to do theoretical work. There are plenty of solvers available (at least somewhat open-source ones) that you can use to mess around with 2D and 3D structures (though, of course, larger structures will take much longer to solve). If you're interested in the numerical parts, I'd even highly recommend writing your own solver! (We can chat in more detail about that if you're interested.)
More classical optics setups with some decent lasers can be found off-the-shelf (though I'd have to look for consumer-type kits, since the toys we have in the lab are a little more than my budget could personally handle ;). Either way, this is the best way to start, since much of the subject really is based on experiments with light polarization, interferometry, etc., which form the basis of much of the work here (and many of the means of measurement). This is essentially what this paper does with the huge structures they've created (except with microwaves, which require some specialized equipment to measure).
Now, if you're interested in doing experimental work with photonic crystals, this question becomes a bit more difficult, since it's essentially required that you have a foundry and some amount of cash to blow (as almost everything is fabricated and would require scanning electron microscopes and such to verify). You can also ship off parts to places like TSMC (which would likely form the basis of a somewhat expensive hobby), which I think deal with some small-scale manufacturing, but the turnaround time is pretty long, as is the cost.
Would you mind going deeper into the suggestion to write your own solver?
I have an idea for making microwave metamaterials fairly cheaply, but I don't have the physics background to write the solver for the material structuring.
I do have a heavy math background, though, e.g., dealing with convex optimization in the context of economics.
Sure! If you already have a good math background, essentially the only thing left is to do a "bit" of numerical computing. Steven Johnson at MIT has a good course covering this (with Julia! Which is awesome and I highly recommend) [0] and, for further information, it's worth looking at some books (of which there are plenty, as far as I can tell; I studied out of Chew's Waves and Fields in Inhomogeneous Media, but this book is quite dated and not particularly strong pedagogically).
Overall, the mathematics itself is not difficult (essentially everyone is using some simple preconditioned CG method for solving the linear problems, along with some [sometimes smart, sometimes not] meshing), but building robust solvers is, almost universally, still an open problem. Depending on what you'd like to simulate, you're going to have to exploit the particular properties of the operators you're working with to get really good results. We can email, and I can say a little more given more details of your project, and potentially guide you in a slightly better direction.
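To make the "simple preconditioned CG" remark concrete, here's a minimal sketch, with the caveat that I'm using a 1D Poisson problem and a Jacobi preconditioner as a stand-in for whatever electromagnetic operator you'd actually assemble:

```python
# Minimal sketch: Jacobi-preconditioned conjugate gradients on a 1D
# finite-difference Laplacian (a toy stand-in for a real wave operator).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

n = 1000
h = 1.0 / (n + 1)

# Second-order finite-difference Laplacian with Dirichlet boundaries.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2

b = np.zeros(n)
b[n // 2] = 1.0 / h   # point source in the middle of the domain

# Jacobi (diagonal) preconditioner -- about the simplest choice there is.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: x / d)

x, info = cg(A, b, M=M)
print("converged" if info == 0 else f"cg stopped with info={info}")
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # relative residual
```

A real solver spends most of its effort elsewhere: meshing, boundary conditions (e.g., PMLs), and preconditioners tuned to the operator, which is where the open problems live.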
Funny, it reminded me of the Plinko board from The Price is Right [0]. Strangely, this is almost exactly how I explain to myself, in dumbed-down terms (armchair quantum physicising over here), how quantum computation might actually work. I never imagined it could actually be mapped to a physical structure.
Given that the future of computing seems to push more and more towards DSP, this seems like it could be an exceedingly powerful technique. Especially if the tech could reach the goal of on-the-fly reprogramming.
I’ve wondered when we will see FPGAs integrated on die with a regular CPU for a similar purpose.
For a novice like me, it sounds a bit like quantum computing, where rather than "digitizing" a physical problem like we do with classical computing, we use physical properties of elements (like the spin of an electron) to solve it. How far does this analogy go?
About halfway between digital and quantum computers:

- Wholly quantum: uses both wave and particle behaviours, plus entanglement.
- Wave computation, as in the article: just waves.
- Digital computation: just impulses or currents in conductors.
Does anybody know how difficult these problems are to solve once you've already calculated the relevant kernel?
Let's say I had that kernel stored in a database: is it just a matrix multiplication or two to calculate the solution? If so, doesn't that kind of invalidate the idea that this is a bottleneck problem that requires a speed-up? Especially once you take into account read/write speeds on the physical structure.
I guess my point here is that the correct benchmark is pre-memoized code. I'd be interested in how it performs against that benchmark.
For a 2D structure, maybe. What is the depth of the features, though? I read those pictures as light shining along the plane, not across it (am I wrong?). If it's across, it's not obvious that a CD-RW's features are deep enough or that the "empty" space is transparent enough.
How about 3D microstructures? It's hard enough to make a one-off 3D structure reproducible, never mind a changeable one.
Well, if the inverse is dense and the excitation as well, then I can see an advantage, because computing a solution by applying the inverse to the excitation would be O(n^2). But I suppose that usually the excitation would be sparse.
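To put numbers on the "pre-memoized" baseline being discussed: invert once up front, and every new excitation costs one dense matvec. A toy sketch (my own made-up operator, nothing from the paper):

```python
# Memoized baseline: pay O(n^3) once to invert, then each new
# right-hand side (excitation) is an O(n^2) dense matvec.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
A = np.eye(n) + 0.001 * rng.standard_normal((n, n))  # toy well-conditioned operator

A_inv = np.linalg.inv(A)           # done once -- the "memoization"

g = rng.standard_normal(n)         # a new excitation
f = A_inv @ g                      # each solve is just a matvec, O(n^2)

print(np.linalg.norm(A @ f - g))   # residual sanity check
```

If the excitation is sparse, only the corresponding columns of the inverse matter, which cuts the cost further, hence the caveat above.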
Is something like this composable? For example, if they built a kind of lambda calculus for their 'hardware kernels', could they represent arbitrary calculations by composing them? Having to create a different kernel for every calculation seems quite costly.
It seems like it could be. This thing sounds like it focuses on Fredholm integral equations, and the solutions should be composable.
These are basically equations of the form

    g(t) = ∫ K(t,s) f(s) ds,

where K(t,s) (the kernel) and g(t) are known, but f(s) isn't. In physical applications, t is usually time. A solution f(s) could be fed into the input of the next device, at least mathematically.
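For what it's worth, here's a toy sketch (mine, not from the paper) of how such an equation becomes an ordinary linear solve once discretized. I've used the second-kind form f(t) = g(t) + ∫ K(t,s) f(s) ds, which is the well-posed variant and, if I remember right, the one the device implements:

```python
# Toy Nystrom discretization of f(t) = g(t) + \int_0^1 K(t,s) f(s) ds:
# sample on a grid, replace the integral with trapezoid-rule weights,
# then solve the resulting (I - K W) f = g system directly.
import numpy as np

n = 200
s = np.linspace(0.0, 1.0, n)
w = np.full(n, s[1] - s[0])        # trapezoid-rule weights...
w[0] *= 0.5
w[-1] *= 0.5                       # ...with halved endpoints

K = 0.5 * np.exp(-np.abs(s[:, None] - s[None, :]))  # toy kernel K(t, s)
f_true = np.sin(2.0 * np.pi * s)
g = f_true - (K * w) @ f_true      # manufacture a consistent g

f = np.linalg.solve(np.eye(n) - K * w, g)  # the actual solve
print(np.max(np.abs(f - f_true)))          # ~machine precision by construction
```

Composition would then amount to feeding one stage's f in as (part of) the next stage's g.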
It's faster and can be miniaturised further than analog ICs, possibly while using less energy. The version used in the article is a proof of concept, chosen for being easy to manufacture with cheap equipment.
This is the most clickbait-y title I've seen in a long time! Not complaining; it definitely piqued my interest. As a Philly native, I love that UPenn is doing cool experiments like this.
Just as fishing bait entices fish to bite, clickbait titles entice people to click.
If a title is interesting enough to get lots of people to click on it, it is by definition clickbait.
Titles for interesting articles should themselves be interesting, as boring titles for interesting articles would increase one's chance of missing them.
An interesting title is only objectionable if the article itself is not worth reading.
But the reason we call it bait and not just "fish food" is that it is there to trick the fish into doing something it does not want to do. Something is not necessarily bait just because it's interesting or enticing.
But there is no term like "clickfood" which would imply that the title is interesting and the corresponding article is worth reading.
So people fixate on the title and call any interesting, well-written title "clickbait" regardless of whether the article itself is worth reading or not.
I'm simply pointing out that such an enticing title is not necessarily bad if the article itself is worth reading.
The alternative is a boring title, which is a disservice to any article worth reading.
> the most clickbait-y title I've seen in a long time
I thought I'd be opening an article about the human brain. Pleasantly, it's about an attenuation-based analog computer that for some reason requires metamaterials. The most novel aspect, for me, is the computational method used to design and direct the fabrication of the apparatus.