‘A Swiss cheese-like material’ that can solve equations (upenn.edu)
167 points by ausbah on March 25, 2019 | 52 comments



This is pretty cool. I don't know the nature of the calculation performed, but if it's an integral that takes, say, 1000 clock cycles on a single-core 1 GHz digital computer, and if it's true that the metamaterial can do the same calculation in around 1 picosecond, then that's roughly a 1,000,000x speedup. Further, if you can create a set of, say, 100 integrators and differentiators into which you can decompose all your complicated math operations, then you have yourself a very, very fast general-purpose higher-order math module.
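To spell out that arithmetic (a back-of-the-envelope sketch using the assumed numbers above, not anything from the article):

    digital_time = 1000 / 1e9   # 1000 cycles at 1 GHz -> 1 microsecond
    analog_time = 1e-12         # ~1 picosecond for the metamaterial (assumed)
    print(digital_time / analog_time)  # -> ~1,000,000x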



Those 1000 clock cycles get you a perfect digital answer that is the same every time. This analog version gets you a faster answer with less accuracy and repeatability. This may be useful in some situations, but I would not call it general purpose.


I'm just speculating, but I don't see why you can't quantize the output electrically. I guess I'm suggesting a mixed-mode system, where an electronic CPU sends an array of values to an optical system that performs a calculation and returns a result electronically.


Sure, you can quantize the output, but that is not going to help your accuracy. Today's GHz CPUs are 64-bit; today's GHz ADCs are 8 to 16 bits. That gets you ~2.4 to ~4.8 decimal sig figs for the analog side and ~15.9 decimal sig figs for 64-bit floating point. And that's assuming your analog side is perfect, which is unlikely, to say the least.

http://www.ti.com/data-converters/adc-circuit/high-speed/rf-...

Also note the fast ones require 1+ watts per channel and cost $700+ each. Not cheap in power or in money.
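For the curious, the bits-to-decimal-digits conversion is just N * log10(2); a quick sanity-check sketch (the 53-bit significand is the standard double-precision figure):

    import math

    def decimal_digits(bits):
        # Each binary bit is worth log10(2) ~= 0.301 decimal digits.
        return bits * math.log10(2)

    print(decimal_digits(8))   # ~2.4 digits for an 8-bit ADC
    print(decimal_digits(16))  # ~4.8 digits for a 16-bit ADC
    print(decimal_digits(53))  # ~15.95 digits for a 64-bit float's 53-bit significand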


Really reminds me of a special type of analog computer that wasn't mentioned, but was definitely used in old school engineering: using conductive paper to model all sorts of fields.

https://en.m.wikipedia.org/wiki/Teledeltos


Similar research using optical diffraction to build neural networks:

https://arxiv.org/pdf/1804.08711.pdf

TWiML podcast of same: https://twimlai.com/twiml-talk-237-deep-learning-in-optics-w...


Hey! I currently work on inverse design, the approach the authors used in this paper, at the Nanoscale and Quantum Photonics lab [0].

If you have any questions (about the specifics of the paper, or, more generally, about the process), feel free to send them over :)

---

[0] nqp.stanford.edu


How many sig figs can you get out of this microwave implementation?

In my experience trying to get RF measurements repeatable starts to become tricky at around 1% error. The dielectric constant of plastic parts changes constantly, capacitors age, etc. How sensitive is this device to temperature, humidity, barometric pressure, etc?


It's unclear. Depending on the fabrication/robustness constraints they added to the inverse design procedure (which they don't really mention in the paper), it could be either extremely robust or extremely sensitive. Generally, if the measurement conditions are controlled, it's pretty easy to keep the noise small relative to most perturbations that happen in the lab, especially when the devices (and their operating wavelength) are this large. Overall, this is not really addressed in the paper (unless I missed it), so I'm afraid I can't give you a direct answer on what they do, just that it is possible (via this method) to design a device that is relatively robust to these changes.


I've been reading up on the theory. Is there a way for me to start playing around with some of these ideas that wouldn't involve me joining a lab?

I'm talking about, I guess, simulators and some cheap kits to play around with photonics.


Sure! I think it's quite doable to do theoretical work. There are plenty of solvers available (at least somewhat open-source ones) that you can use to mess around with 2D and 3D structures (though, of course, larger structures will take a much longer time to solve). If you're interested in the numerical parts, I'd even highly recommend writing your own solver! (We can chat in more detail about that if you're interested.)

More classical optics setups with some decent lasers can be found off the shelf (though I'd have to look for consumer-type kits, since the toys we have in the lab are a little more than my budget could personally handle ;). Either way, this is the best way to start, since much of the subject really is based on experiments with light polarization, interferometry, etc., which form the basis of much of the work here (and many of the means of measurement). This is essentially what this paper does with the huge structures they've created (except with microwaves, which require some specialized equipment to measure).

Now, if you're interested in doing experimental work with photonic crystals, this question becomes a bit more difficult, since it's essentially required that you have a foundry and some amount of cash to blow (almost everything is fabricated and would require scanning electron microscopes and such to verify). You can also ship off parts to places like TSMC (which would likely form the basis of a somewhat expensive hobby), which I think deal with some small-scale manufacturing, but the turnaround time is pretty long, as is the cost.


Would you mind going deeper into the suggestion to write your own solver?

I have an idea for making microwave metamaterials, fairly cheaply, but don’t have the physics background to write the solver for the material structuring.

I have a heavy math background though, e.g., dealing with convex optimization in the context of economics.


Sure! If you already have a good math background, essentially the only thing left is to do a "bit" of numerical computing. Steven Johnson at MIT has a good course for getting started on this (with Julia! Which is awesome and I highly recommend) [0] and, for further information, it's worth looking at some books (of which there are plenty, as far as I can tell; I studied out of Chew's Waves and Fields in Inhomogeneous Media, but that book is quite out of date and not particularly good pedagogically).

Overall, the mathematics itself is not difficult (essentially everyone is using some simple preconditioned CG method for solving the linear problems, along with some [sometimes smart, sometimes not] meshing), but building robust solvers is, almost universally, still an open problem. Depending on what you'd like to simulate, you're going to have to make use of the different properties of the operators you're working with to get really good results. We can email and I can say a little more given more details of your project, and potentially guide you in a slightly better direction.

----

[0] https://github.com/mitmath/18303/tree/fall16
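To make the "write your own solver" suggestion concrete, here is a minimal sketch of a 1D frequency-domain Helmholtz solve on a finite-difference grid. All names, parameters, and the scenario are my own illustrative assumptions; a real solver would want absorbing boundaries, smarter meshing, and an iterative method such as the preconditioned CG mentioned above rather than a direct sparse solve:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def solve_helmholtz_1d(eps, dx, wavelength, source):
        # Solve d2E/dx2 + k0^2 * eps(x) * E = -source with zero boundaries.
        n = len(eps)
        k0 = 2 * np.pi / wavelength
        # Second-derivative operator (central differences, Dirichlet boundaries).
        lap = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / dx**2
        A = lap + sp.diags(k0**2 * np.asarray(eps))
        return spla.spsolve(A.tocsc(), -np.asarray(source))

    # Example: a point source in vacuum next to a dielectric slab (eps = 4).
    n, dx, wl = 400, 0.05, 1.0              # grid points, spacing, wavelength
    eps = np.ones(n); eps[180:220] = 4.0
    src = np.zeros(n); src[50] = 1.0
    E = solve_helmholtz_1d(eps, dx, wl, src)
    print(E[:5])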



Amusingly, the overhead view of their structure looks vaguely like an overhead view of the brain.


Funny, it reminded me of the Plinko board from The Price Is Right [0]. Strangely, this is almost exactly how I explain to myself, in dumbed-down terms (armchair-quantum-physicising over here), how quantum computation might actually work. I never imagined it could actually be mapped to a physical structure.

[0] https://en.wikipedia.org/wiki/Plinko


I was going to say, "that's the weirdest looking Swiss cheese I've ever seen", but I like your way better.


I didn't notice, but you're right, it really does!


I thought of ant tunnels, cross-section.


Given that the future of computing seems to push more and more towards DSP, this seems like it could be an exceedingly powerful technique. Especially if the tech could reach the goal of on-the-fly reprogramming.

I’ve wondered when we will see FPGAs integrated on die with a regular CPU for a similar purpose.


This work focuses on steady-state computing, but it could also be interesting to use transient physical behaviors to process time-varying signals.

Maybe by modelling dynamical systems as "neural nets" as in: https://arxiv.org/abs/1806.07366 and https://arxiv.org/abs/1808.08412

Or by using complicated physical systems we don't even understand to build Echo State Networks: http://www.scholarpedia.org/article/Echo_state_network


To a novice like me, it sounds a bit like quantum computing, where rather than "digitizing" a physical problem like we do with classical computing, we use physical properties of elements (like the spin of an electron) to solve it. How far does this analogy go?


About halfway between digital and quantum computers.

  Wholly quantum - uses both wave and particle behaviours and entanglement
  Wave computation like in article - just waves
  Digital computation - just impulses or currents in conductors.


Very interesting, look forward to seeing this develop.

Reminds me of the analog delay-line memories from the 1940s to the late 1960s (https://en.wikipedia.org/wiki/Delay_line_memory#Mercury_dela...) and the bucket-brigade devices of the 1970s (https://en.wikipedia.org/wiki/Bucket-brigade_device). But in a whole new dimension.


Does anybody know how difficult these problems are to solve once you've already calculated the relevant kernel?

Let's say I had that kernel stored in a database: is it just a matrix multiplication or two to calculate the solution? If so, doesn't that kind of invalidate the idea that this is a bottleneck problem that requires a speed-up? Especially once you take into account read/write speeds on the physical structure?

I guess my point here is that the correct benchmark is pre-memoized code. I'd be interested in how it performs against that benchmark.
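For what it's worth, the pre-memoized digital baseline being described is basically the following (a sketch with made-up sizes; G stands in for whatever precomputed, stored operator maps a new input to the solution):

    import numpy as np

    n = 1000
    G = np.random.rand(n, n)  # stand-in for a precomputed, stored kernel / inverse operator
    b = np.random.rand(n)     # a new excitation / right-hand side
    x = G @ b                 # one dense mat-vec per new input: O(n^2) work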


> but if you want to change the shape of the room, for example, you will have to make a new kernel.

So you can't iteratively improve your room, unless you don't mind fabricating all the kernels.


> Scaling down the concept to the scale where it could operate on light waves and be placed on a microchip

> “We could use the technology behind rewritable CDs to make new Swiss cheese patterns as they’re needed,”

Could be not much different from compiling and running stuff on an FPGA.


Assuming these structures can be changed like an FPGA.

It’s not obvious how you can change the microstructure of a material to something you like in minutes.


Well, there are CD-RWs


For a 2D structure, maybe. What is the depth of the feature, though? I saw those pictures as light shining along the plane, not across it (am I wrong?). If it's across, it's not obvious that a CD-RW's features are deep enough or that the "empty" space is transparent enough.

How about 3D microstructures? It's hard enough to make a one-off 3D structure reproducible, never mind a changeable one.


>How about 3D micro-structures? It’s hard enough to make a one off 3D structure reproducible

Well, there are 3D printers :)


Disclaimer: I find this super cool. I love analog solutions to engineering problems.

This is about optical wavelength scales -> feature sizes << 1500 nm for infrared.

3D printers' feature sizes work for gigahertz waves (as a guesstimate; rough numbers sketched below), assuming:

- The features can be printed - 3D printers have limitations after all.

- The materials that can be 3D printed are optically suitable.
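The guesstimate, roughly (assumed values; treating "a fraction of a wavelength" as the needed feature size is a rule of thumb, not something from the article):

    c = 3e8                    # speed of light, m/s
    for f in (10e9, 200e12):   # ~10 GHz microwaves vs ~200 THz (1500 nm infrared)
        wavelength = c / f
        print(f, wavelength, wavelength / 10)  # wavelength and a ~lambda/10 feature size

Millimetre-scale features for ~10 GHz are well within a hobby printer's reach; ~150 nm features for 1500 nm light are not.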


I think there is a problem with this technique.

Computing a kernel is an inverse problem, which is more difficult than solving a system for a given right-hand side.

Linear algebra tells us to never invert a matrix, and here we're actually fabricating the inverse physically.

Or, of course, perhaps I'm misunderstanding something.


You solve the inverse problem once. That takes a while.

Then you build it. More time.

Then you run it for thousands of different inputs. There's the benefit.

For example:

You build a kernel describing the acoustics of a sound (i.e. a body of water) once.

Then you can instantly solve for what the sounds (acoustics) are and where they are coming from.


Well, if the inverse is dense and the excitation as well, then I can see an advantage, because computing a solution by applying the inverse to the excitation would be O(n^2). But I suppose that usually the excitation would be sparse.


Is something like this composable? For example, if they built a kind of lambda calculus for their 'hardware kernels', could they represent arbitrary calculations by composing them? Because having to create a different kernel for every calculation seems quite costly.


It seems like it could be. This thing sounds like it focuses on Fredholm integral equations, and the solutions should be composable.

These are basically equations of the form g(t) = ∫ K(t,s) f(s) ds,

where K(t,s) (the kernel) and g(t) are known, but f(s) isn't. In physical applications, t is usually time. A solution f(s) could be fed into the input of the next device, at least mathematically.
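As a rough digital illustration of that composition: discretize g(t) = ∫ K(t,s) f(s) ds on a grid, recover f from (K, g), and the recovered f can then serve as the next stage's input. The Gaussian kernel and Tikhonov regularization below are my own illustrative choices (first-kind Fredholm problems are ill-posed, so some regularization is needed), not anything from the paper:

    import numpy as np

    n = 200
    s = np.linspace(0, 1, n)
    t = np.linspace(0, 1, n)
    ds = s[1] - s[0]

    # Discretized kernel K(t, s): a smoothing Gaussian, chosen for illustration.
    K = np.exp(-(t[:, None] - s[None, :])**2 / 0.01) * ds
    f_true = np.sin(2 * np.pi * s)   # the unknown f(s) we pretend to look for
    g = K @ f_true                   # the "known" g(t)

    # Tikhonov-regularized least squares: (K^T K + lam*I) f = K^T g.
    lam = 1e-6
    f_est = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)
    print(np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true))  # relative error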


Just curious: how is this any better than an analog circuit (possibly an IC) solving the same problem?


It's faster and can be miniaturised more than analog ICs, possibly while using less energy. The version used in the article is a proof of concept, chosen for being easy to manufacture with cheap equipment.


I don't know much about analog computing, but how well can analog computers solve PDEs in 3 spatial dimensions?

I know that until digital computers came along, structural analysis was done with strain lacquer techniques, so the circuit approach seems limited.


The advantages won't be clear until they do more testing, but it might be possible to be more power efficient than silicon ASICs.


Continuous variable quantum computing is going to be huge.


This is the most clickbait-y title I've seen in a long time! Not complaining, it definitely piqued my interest. As a Philly native, I love that UPenn is doing cool experiments like this.


For me, clickbait has a negative connotation: the expectation that the article won't deliver on the headline.

This article delivered. Such a cool experiment and field of study.


I agree - this article was not clickbait. The content delivered on the title.

Piquing your interest with the title is just good headline-writing. It only becomes clickbait when the title is a cynical perversion of the content.


Just as fishing bait entices fish to bite, clickbait titles entice people to click.

If a title is interesting enough to get lots of people to click on it, it is by definition clickbait.

Titles for interesting articles should themselves be interesting, as boring titles for interesting articles would increase one's chance of missing them.

An interesting title is only objectionable if the article itself is not worth reading.


But the reason we call it bait and not just "fish food" is that it is there to trick the fish into doing something it does not want to do. Something is not necessarily bait just because it's interesting or enticing.


But there is no term like "clickfood" which would imply that the title is interesting and the corresponding article is worth reading.

So people fixate on the title and call any interesting, well-written title "clickbait" regardless of whether the article itself is worth reading or not.

I'm simply pointing out that such an enticing title is not necessarily bad if the article itself is worth reading.

The alternative is a boring title, which is a disservice to any article worth reading.


> But there is no term like "clickfood"

That's supposed to just be called journalism.


> the most clickbaity-y title I've seen in a long time

I thought I'd be opening an article about the human brain. Pleasantly, it's about an attenuation-based analog computer that for some reason requires metamaterials. The most novel aspect, for me, is the computational method used to design and direct the fabrication of the apparatus.


Can we ask one of these new types of machines, quantum computers etc., whether anti-gravity is possible?



