I'm building something like that, but I don't know if enough people read enough papers. Do you read a lot of academic papers or otherwise news articles online?
Surprised nobody has called out the latency aspect: this is an ML solver that can give you the answer in a single picosecond. It could resolve a thousand ML queries before your CPU-based solver had its first clock tick on the first byte of the first pixel.
That could have some interesting applications for things that need it. Or scale: a gazillion ML calculations per second. With zero power consumption. From a piece of glass.
Well, it is a very simple solver: you can't change it after it's synthesized, and it's not true that it uses no power - you need strong enough incoming light. That's your power source. It's probably safe to assume that each layer removes some power in a multiplicative fashion, so there's only so much you can do in a single glass step before the beam dissipates.
And as far as timing goes - 1 cm of glass delays light by ~50 picoseconds. That corresponds to around 20 gigahertz. Fast, but not mind-blowingly fast.
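For anyone who wants to sanity-check those figures, here's the arithmetic (assuming a refractive index of about 1.5, a typical value for glass):

```python
# Transit time of light through glass, back-of-the-envelope.
c = 299_792_458          # speed of light in vacuum, m/s
n = 1.5                  # assumed refractive index of glass
v = c / n                # ~2.0e8 m/s inside the glass

delay = 0.01 / v         # transit time through 1 cm, in seconds
rate = 1 / delay         # equivalent "clock rate"
print(f"{delay * 1e12:.0f} ps per cm, ~{rate / 1e9:.0f} GHz")
```

So the ~50 ps / ~20 GHz figures check out for a centimetre-scale path.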
Well it's MNIST - most basic and classic OCR benchmark out there. Also, no info on how much light power is actually necessary, though I'm sure it's exponential in the number of layers. And another thing I forgot to write is: how are you going to interface with this thing?
Glass is 100% recyclable. It's conceivable that a robot could be built to fabricate ML models and melt down / update old models when necessary. That would be pretty steampunk.
The device is not 1 cm, it is 'microns each side', right? That should make it roughly 10000x faster than 20 GHz, which is quite mind blowing for me. Especially if you imagine multi-layer superstructures for resolving very complex tasks with small amounts of energy.
The latency here is dominated by converting the 2D image to a 1D wavefront that's input to the device. This stage would involve some digital logic and relatively slow components with response time on the order of milliseconds.
This is the usual pipeline problem, though: sometimes the bottleneck is the CPU, and sometimes the bottleneck is memory bandwidth. This just places the ball firmly in the memory bandwidth court...
(You can have a hundred worker CPU cores doing the necessary conversions, but just need to worry about the parallelization complexity. But, then again, this is exactly what already happens when we feed data to hefty devices like GPUs and TPUs.)
Blimey, this is really stretching the definition of AI, surely? It’s a piece of glass that has been designed by a lot of trial and error to perform one specific task. It sounds like humans were analysing the fitness and making the modifications. I wouldn’t call it “unpowered AI” any more than I’d call a coin sorter that.
Because it uses glass as a substrate and light as an information carrier?
Most of what we call “AI” today also uses prelearned weights for their neural networks and in many use cases these weights are not touched after deployment.
I don’t see why a neural network encoded in glass should not be an AI while the same neural network on a computer is one — either you have to call both AI or neither.
Most of what we call AI can learn and update to an extent, in order to be able to match a wider variety of inputs or to improve accuracy on existing inputs; this is a hardcoded solution. If this counts as AI, then every bit of software ever written also counts as AI, which makes the term even more meaningless and marketing-buzzwordy than it already is.
I suppose the question is, if an AI has learned and you export that final learned state to use in a now-hardcoded classifier, is that classifier still AI (or part of the overall AI) or is it simply the output? I can imagine arguments on both sides. If you accept that as AI, then sure, this fits the bill!
> Most of what we call AI can learn and update to an extent
Most of what we call AI are hardcoded solutions once in production. There may be ongoing offline improvements being made, but once the improvements are established the production AI is replaced with a point-in-time snapshot of the AI undergoing offline training. Self-learning in production causes all kinds of problems, but most significantly it's a security issue since it gives an attacker the ability to manipulate behavior by curating examples.
It's comical how little people understand about machine learning; no one calls an ODE solver artificial intelligence, but gradient descent on an interesting equation is somehow now A.I.
Why does this system have to be hard coded? Certainly you could automate the glass fabrication technique with a computer and robotics – I'm just imagining that was beyond the scope of this study.
An equivalent in neural nets would be some model trained and then, after the results are satisfactory, burned into an FPGA. That FPGA is not AI, and neither is this glass. Or both are.
At least the neural networks were at some point self-assessing and self-modifying, and plausibly could be said to “learn” something. Here it seems more plausible to say that the humans learned what structure to produce than that the glass did.
But you’re right, I think many “AIs” shouldn’t really be named that either!
They didn't manually adjust the glass until it worked (which would be infeasible); they wrote a differentiable simulator and used it to determine the material to use at each point via gradient descent, which is quite a feat.
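For readers curious what "gradient descent through a simulator" looks like, here is a heavily simplified toy in numpy. The random mixing matrix `W` standing in for diffraction, the finite-difference gradients, and all the sizes are my own illustrative choices, not anything from the paper:

```python
import numpy as np

# Toy sketch: simulate the optical medium digitally, then run gradient
# descent on the simulated parameters (here, a single phase mask).
rng = np.random.default_rng(0)
N, K, M = 16, 2, 40                  # field size, detector regions, samples

W = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # fixed "propagation" stand-in
X = rng.normal(size=(M, N))          # toy input fields
y = rng.integers(0, K, size=M)       # toy class labels

def detector_power(phases, x):
    field = (x * np.exp(1j * phases)) @ W       # learnable phase mask, then propagation
    return (np.abs(field) ** 2).reshape(K, -1).sum(axis=1)  # power per detector region

def loss(phases):
    nll = 0.0
    for x, label in zip(X, y):
        p = detector_power(phases, x)
        nll -= np.log(p[label] / p.sum() + 1e-12)   # cross-entropy on detector powers
    return nll / M

phases = np.zeros(N)
eps, lr = 1e-5, 0.05
for _ in range(200):                 # crude finite-difference gradient descent
    base = loss(phases)
    grad = np.array([(loss(phases + eps * np.eye(N)[i]) - base) / eps
                     for i in range(N)])
    phases -= lr * grad

print(f"loss before: {loss(np.zeros(N)):.3f}  after: {loss(phases):.3f}")
```

The real work uses a proper physical diffraction model and automatic differentiation, but the loop is the same shape: simulate, measure error at the detectors, nudge the medium's parameters downhill.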
That's exactly as self-assessing and self-modifying as a neural network implemented using any other kind of computation substrate.
I skimmed the paper linked there. They used a digital model of the glass-impurity substrate to adjust the locations of the impurities. This doesn't sound much different from ordinary training by backpropagating weight updates, except that here one can literally see those weights. I don't see why it wouldn't fit the usual definition of a neural network.
It may have been designed using AI, but it's not AI.
The impurities are not the weights, they are the output of the design software. It is the design software that has been learning something, not the pane of glass.
It is like using AI to design, say, the most aerodynamic plane.
Only here they used an AI to design something that performs a task that we traditionally use as a benchmark for AI models. But this piece of glass, just like the plane mentioned above, is not learning anything and it's not an AI.
Thanks for giving this analogy; it made more sense than what I had imagined above.
If I understand correctly, it’s the design process, not glass, that used learning. Along the same analogy, I guess the sculpture in London (glass, here), which was designed using random walk (neural nets), would be the same: the sculpture in itself isn’t “random walk”, but the design process was.
Edit: I read the other comments and it’s getting more confusing! AI, from my school courses, would be implementation of algorithms like Hill climbing where a system is online: it takes some input, and tries its best to find a solution. Now if I take the output itself for use in, say, signal processing — that “output” would be a “device” to do something and won’t be an “AI device”. Does this make any sense at all? I’d love to get some pointers on this to read.
It hardly passed the litmus paper test for what we class as AI. Though that said, by their definition, litmus paper is equally AI as it takes an input and produces an output.
Just seems a bit like being told you can buy a flying car, only to find out that it's a short flight time, not flying very high and depends upon approaching a ramp at speed.
> The AI works by taking advantage of small bubbles deliberately incorporated into the glass, as well as embedded impurities such as graphene. As the light waves reflected off the handwritten numbers pass through the glass AI, they are bent in specific ways by the bubbles and impurities. This bending process focuses the light onto one of ten specific spots on the other side of the glass, depending on the number presented. (...)
> Training the AI involved presenting the glass with different images of handwritten numbers. If the light did not pass through the glass to focus on the correct spot, the team adjusted the size and locations of impurities slightly. After thousands of iterations, the glass “learned” to bend the light to the right place. (...)
This sounds pretty much like they're training a neural network.
I agree, but then again how is this different from using pretty much the same mechanics but in software form? Most “AI” that is used in consumer apps uses pre-made models, that are then applied to input data. This seems very similar.
It's not different, that's the point. We already know that these "mechanics" are very powerful, and now this paper shows that we can get results at nearly the speed of light.
There are two different kinds of neural networks. Large models are fed lots of data to train them. But when the training is done, the results can be closely approximated by a much smaller network. The smaller networks can be distributed more easily and even used on phones. But they are fixed, not used for training anymore.
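That large-network-to-small-network step is usually called knowledge distillation: the small model is trained to mimic the big model's outputs rather than the original labels. A toy numpy sketch (random "teacher" MLP, linear "student", both stand-ins of my own invention, not any production pipeline):

```python
import numpy as np

# Knowledge distillation, toy version: fit a small "student" to mimic
# a fixed, already-trained "teacher" network's outputs.
rng = np.random.default_rng(1)
D, H, K = 8, 256, 3                  # input dim, teacher width, outputs

W1 = rng.normal(size=(D, H)) / np.sqrt(D)
W2 = rng.normal(size=(H, K)) / np.sqrt(H)

def teacher(x):                      # the big, fixed network (a random MLP here)
    return np.tanh(x @ W1) @ W2

X = rng.normal(size=(1000, D))       # unlabeled inputs
T = teacher(X)                       # teacher's outputs serve as training targets

# Student: one linear layer (D*K params vs the teacher's D*H + H*K),
# fit by least squares to the teacher's outputs, not ground-truth labels.
S, *_ = np.linalg.lstsq(X, T, rcond=None)
rel_err = np.mean((X @ S - T) ** 2) / np.mean(T ** 2)
print(f"student mimics teacher with relative error {rel_err:.2f}")
```

A real distillation setup trains a smaller neural network on softened teacher probabilities, but the principle is the same: once training is done, a much cheaper model can capture most of the learned function, and that frozen result is what gets deployed.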
It's definitely AI - it's still recognizing a handwritten digit. But not all AI applications require further training. Sometimes you just need the end result.
The issue is what is defined as "AI". From [1], some of those definitions include
- Thinking Humanly - the ability to problem solve, introspection, and learning (Cognitive Science)
- Thinking Rationally - the ability to perceive, reason, and act (Logic)
- Acting Humanly - simply do things that (at the moment) humans do better (Turing Machines)
- Acting Rationally - the design of intelligent agents (the focus of the textbook, Betty's Brain[2])
Yes, they all vaguely sound the same, but the point is, if you took the glass and had to select which definition it marks off, which would it be? My point is what about this glass makes it "intelligent"? The end of the article starts to talk about a combination of AI-glasses could form some sort of efficient image recognition process. Since we still don't have a clear definition of what intelligence is, is it simply a combination of tiny little perceptrons (from neural networks) that have specific differentiation tasks?
What people call “AI” is actually a long historical process of crystallizing collective behavior, personal data, and individual labor into privatized algorithms that are used for the automation of complex tasks: from driving to translation, from object recognition to music composition. Just as much as the machines of the industrial age grew out of experimentation, know-how, and the labor of skilled workers, engineers, and craftsmen, the statistical models of AI grow out of the data produced by collective intelligence.
I believe the definitions of “intelligent” and “adaptive” are intertwined. I’m not sure what the distinction is between those two words, but I know they are connected.
Because this system can’t adapt, I agree it is probably not meaningfully “intelligent”.
Agreed; I would say it's more like a logic tree, or an optical circuit. Even academics are choosing hot names for papers: in CS today, if your paper does not have AI in it, good luck getting attention.
To be clear, it doesn't seem like New Scientist realised this, but they haven't actually made a glass block that does this. They've just simulated one.
Yea, the part of the article where they described the training procedure as if they were re-arranging impurities in a physical piece of glass beggared belief.
They've simulated a physical system using digital hardware and software that would operate like a read-only optical neural network analog if it existed in a physical substrate.
I suppose the next stage is to develop another digital analog of a physical system which can decide for readers if the original digital analog of a physical system is "real AI".
Presumably with sufficient training it would be able to read and assess New Scientist articles about its own development.
What would be the practical applications of something like this?
+ Can you embed it in a mirror to portray information if a face is recognized?
+ Or can you make glass semitransparent when something is recognized?
+ What about windows that blur anything that looks like a face? Preferably keeping other objects the same. A bit of an autoencoder where only certain objects are blurred.
+ Can the amount of light shining on it lead to different results? Then you can show a little movie by shining more and more light through the glass using it in reverse.
It's a simple embodiment but I guess there are thousands of applications we haven't thought of yet. Really cool train of thought!
>Can the amount of light shining on it lead to different results? Then you can show a little movie by shining more and more light through the glass using it in reverse.
I don't think movies, but some monotonic changes should be possible using critical refractive angles.
It now also reminds me of the controversial Moon Mode on the Huawei P30 Pro. On smartphone cameras you might now incorporate things in the lens rather than in the firmware. Be it improved lunar pictures or snapchat/instagram filters...
This sounds like they've rediscovered the digital sundial and are running it in reverse.
Light goes into face A of a digital sundial predominantly from a particular angle and comes out face B patterned like a number. Presumably if you shined light in the pattern of a number on face B, you'd get light coming out of face A predominantly at a particular angle.
Except that if you ran a digital sundial in reverse, it would only recognize one particular image of a number. From TFA it seems to be able to recognize multiple handwritten images.
> Except that if you ran a digital sundial in reverse, it would only recognize one particular image of a number.
Only because the sundials are purpose-built for producing clear digits. This would produce blurry messes that, if you squint hard, would look a bit more like one digit than like the others. But the principle doesn't appear to be different.
This doesn't "recognize" anything. It's just a complex waveguide. You can see it in their ray path diagrams.
'This doesn't "recognize" anything. It's just a complex waveguide. You can see it in their ray path diagrams.'
Which is how classifiers work. Which is also how our brains work. The only real difference is the medium. Recognition is the reliable ability to compress a large form into a very small amount of information, distinguishable between forms. When I look at a hand-drawn "8", I don't store all the curves I see as the number 8; I have an abstraction of the number in my head, and I signal recognition by activating that abstraction, very similarly to how this glass computer functions. How else does one recognize something?
Well, this plate doesn't choose which output is the correct one. Something else has to look and decide which number-correspondent-band is the maximal/correct one. But I don't want to argue that specific distinction. If you want to call the waveguide a recognizer, I'm ok with you doing that.
The whole presentation of this article is very "woo". They say bullshit like "the glass learned to bend the light". This is clearly stupid and false. The system DESIGNING the glass learned how to construct the glass such that the light bent appropriately. The glass hasn't learned shit.
Picking which of the 10 bands is the brightest has got to be trivial, so much more so than directly recognising eg. '4', so it does seem to do what it says.
> The system DESIGNING the glass learned how to construct the glass such that the light bent appropriately
With a bit of rephrasing you could say exactly the same about a neural network surely. The learning was embodied in the glass, so depending on definitions, the glass was 'taught' something. It 'learnt' something.
I kind of see what you're saying, but it does seem to be discounting something actually interesting and new (in my experience).
“Rediscovering” is a bit of a stretch here, given that sundials do not perform multi-tier machine learning network evaluations.
Our grasp of materials science to date has not permitted compact, stable, passive optical computational systems at the degree necessary to implement digit parsing. That’s the advancement shown here.
I am bothered by this use of the word 'intelligence'. To my mind, intelligence is a comparative term, like height or weight. If the universe consisted of one individual, how could you say tall or short without some reference to a norm or population? Further, the idea of multiple individuals implies separate copies of a specific thing/person that can communicate. So how can we talk about the intelligence of a piece of glass? Humans have been fascinated for a long time with animals and automatons that can mimic human behaviors: is a poodle that walks more intelligent than a beagle that does not? Is a mechanical Turk in a box intelligent? I wonder how long it will be before someone discovers the artificial intelligence embodied in mechanical clocks - they convert the stored energy of a spring into a sequence of numbers.
I wonder if one can create a similar auditory transfer function that is attuned to a specific voice.... basically evolve a sound cavity of specific shape
You could have a large capture cavity initially that has various rods of materials that would resonate sympathetically with certain frequency ranges. Those rods run into their own chambers, separating the frequencies.
Problem is with voice, it's the cadence, content and variations in the tones that help discern an individual. For that you need to factor in a variable amount of past data as you can't just take a snapshot - think taking a picture of a single frame in a movie and knowing what the entire movie is about.
However, you could make something that needs a certain set of frequencies, so with that, something that could identify males or females by the sound of the voice, using the general rule that male voices are lower in frequency. That would be a start. Though even for that simple task, you start to see exceptions.
But if you can flowchart it and have ways to handle the logic, who knows what is possible and impossible.
Probably a cheat, but a piezo disc setup would give you zero-power audio capture, and with other piezos to power basic processing you would be able to do something, though you may find yourself shouting during testing; a capture cone to focus the audio onto the piezo would help immensely. I'd seriously have to sit down and work out what surface area of piezos, what vibration levels, and hence how much power you could produce. You could mitigate that with some larger capacitors to capture all the passive sound and low-level power the piezos produce during the day, triggering the processor to check for your keyword and instigate an action.
This is essentially a kind of continuous 3D holography - unlike conventional holography, which is a 2D transform that represents 3D light paths.
And that's very interesting. This is the first application of what could be a very very powerful new technology.
So far as I know addressable super-high resolution 2D holographic displays are still a lab curiosity. There are some obvious technical problems with adding depth, but as soon as you have a 3D addressable light matrix that can be reconfigured dynamically, you have a whole new base for physical computing.
It's a special kind of lens or prism, and "headline-grade buzzword choices" notwithstanding, this is yet another very interesting result out of the University of Wisconsin - Madison.
As others have stated here, this construction is due to machine learning whether or not the device itself has a capability to adapt further.
Indeed, it's certainly not AI as we know it; "analogue computer", or in this case "optical computer", would be more apt and more palatable than "AI".
Makes you wonder what kind of spin a "Hello World" program would glean from some journalistic minds.
That so nails it I found myself clicking your profile and expecting some media outlet reference [chuckle]. Which means I've either gone full cynic, or the lines between parody and news have blurred so much that I'm seeing double. Though I can still enjoy a chuckle and thank you for that.
We don't have AI as we know it either; this would be a truly earth shattering discovery if it were. But they have created a functional, reproducible way to build unpowered glass classifiers, which is ridiculously cool, and opens up quite a few new scientific doors.
AI needs an IQ scale. You're right that what the masses class as AI is not what marketing classes as AI; it's a spectrum, and some way to discern where a system sits on it is needed. An AI equivalent of the IQ scale would go a long way towards letting the masses instantly comprehend what they are getting.
Number system converter using light and glass - core logic
Alphabet to ASCII code converter - application 1
ASCII code to binary converter - application 2
Imagine if converters are placed on top of one another.
The light source need not be a problem, as we can stack lenses to power it up.
Based on reading others' comments, I can see they are comparing it to an AI.
It's a problem solver using light and stacked glass. It uses structure from an AI. Neural networks filter data, whether you use a digital (electronic) device or a purely electrical one.
Input - processing - output: that's a conventional computation engine, and nothing says it should be electrical or mechanical. The first computer was a mechanical device.
This uses light!
Here is another application.
How can a blind person tell whether it is day or night? Give them a light-bending device that focuses light on the person's palm: if it heats up, it's day.
Use stacked light-bending layers to get a yes/no answer.
They must be restricted to linear models, right? So this is basically an implementation of logistic regression? Doing nonlinear operations optically is much trickier, and almost definitely active instead of passive.
EDIT: Actually they describe some nonlinear material inclusions in the paper, interesting.
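The nonlinearity really is the crux: a stack of purely linear layers, optical or otherwise, collapses into a single linear map, so depth adds nothing by itself. A short numpy demonstration:

```python
import numpy as np

# Any composition of linear layers is itself a single linear layer:
# C(B(Ax)) == (CBA)x, so a purely linear stack has no extra depth power.
rng = np.random.default_rng(0)
A, B, C = (rng.normal(size=(8, 8)) for _ in range(3))
x = rng.normal(size=8)

deep = C @ (B @ (A @ x))      # three "layers" applied in sequence
single = (C @ B @ A) @ x      # one precomposed layer

assert np.allclose(deep, single)   # identical: the stack is one linear map
```

That said, intensity detection at the output is quadratic in the complex field (|E|^2), so even a "linear" diffractive stack read out with power detectors isn't quite plain logistic regression; the in-material nonlinearities are what make the intermediate layers count.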
You could cast thousands of these into space with lenses on them and they would have just enough power to send a signal when they detect an extraterrestrial planet or something.
Power free, kinda. Still needs light, though. Raises the question of whether this would be more efficient than a solar-powered TPU given the same amount of light.
It is very surprising how many people here resent the idea of an analog solution. This is an irrational mental block: anything that is not digital cannot be "intelligent". I wonder what the reason is - age, education? Is this what CS does to people? To me, for example, having a neural network in a piece of glass is genius.
Not that this thing isn't cool, but if my house has a window then is this AI too? I can now see through it and tell if there's a cat sitting on the lawn or not.
I might be wrong, but I believe the glass is the output of an AI program, trained over many manually corrected iterations of glass making, to arrange impurities for object recognition. The glass and impurities themselves aren't the AI here; the program that can repeatedly be used in mass production to make object-recognizing glass is the AI.
This looks like any of the “build your first neural net” tutorials online that do the same task. The only difference is that training this one was a lot more work.
They use graphene to approximate the ReLUs. The device is a few microns on each side.
The 2D image is first converted to a 1D wavefront by some external means, then launched into the side of the device.
Training happens in a digital computer, by simulating the behavior of the device.
Full paper: https://arxiv.org/pdf/1810.07815.pdf