Blimey, this is really stretching the definition of AI, surely? It’s a piece of glass that has been designed by a lot of trial and error to perform one specific task. It sounds like humans were analysing the fitness and making the modifications. I wouldn’t call it “unpowered AI” any more than I’d call a coin sorter that.
Because it uses glass as a substrate and light as an information carrier?
Most of what we call “AI” today also uses pre-learned weights for its neural networks, and in many use cases those weights are not touched after deployment.
I don’t see why a neural network encoded in glass should not be an AI while the same neural network on a computer is one: either you have to call both AI or neither.
Most of what we call AI can learn and update to an extent, in order to match a wider variety of inputs or to improve accuracy on existing inputs. This is a hardcoded solution. If this counts as AI, then every bit of software ever written also counts as AI, which makes the term even more meaningless and marketing-buzzwordy than it already is.
I suppose the question is, if an AI has learned and you export that final learned state to use in a now-hardcoded classifier, is that classifier still AI (or part of the overall AI) or is it simply the output? I can imagine arguments on both sides. If you accept that as AI, then sure, this fits the bill!
> Most of what we call AI can learn and update to an extent
Most of what we call AI are hardcoded solutions once in production. There may be ongoing offline improvements being made, but once the improvements are established the production AI is replaced with a point-in-time snapshot of the AI undergoing offline training. Self-learning in production causes all kinds of problems, but most significantly it's a security issue since it gives an attacker the ability to manipulate behavior by curating examples.
It's comical how little people understand about machine learning: no one calls an ODE solver artificial intelligence, but gradient descent on an interesting equation is somehow now A.I.
Why does this system have to be hard-coded? Certainly you could automate the glass fabrication technique with a computer and robotics – I'm just imagining that was beyond the scope of this study.
An equivalent with neural nets would be a model that is trained and then, once the results are satisfactory, burned into an FPGA. That FPGA is not AI, and neither is this glass. Or both are.
At least the neural networks were at some point self-assessing and self-modifying, and plausibly could be said to “learn” something. Here it seems more plausible to say that the humans learned what structure to produce than that the glass did.
But you’re right, I think many “AIs” shouldn’t really be named that either!
They didn't manually adjust the glass until it worked (which would be infeasible), they wrote a differentiable simulator and used it to determine the material to use at each point via gradient descent, which is quite a feat.
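Roughly, the trick looks like this (a minimal JAX sketch; `simulate`, the array shapes, and the material parameterisation are made-up stand-ins for their diffractive optics model, not the actual code):

    import jax
    import jax.numpy as jnp

    # Hypothetical stand-in for the paper's differentiable optics simulator.
    # "material" is the design variable (one value per position in the glass);
    # the output is the light intensity arriving at each of 10 detector spots.
    def simulate(material, image):
        field = image * material                  # toy light/impurity interaction
        return field.reshape(10, -1).sum(axis=1)  # intensity per detector spot

    def loss(material, image, label):
        probs = jax.nn.softmax(simulate(material, image))
        return -jnp.log(probs[label])             # want the light focused on spot `label`

    # Because the simulator is differentiable, the gradient says how to nudge
    # the material at every point simultaneously, instead of trial and error.
    grad_wrt_material = jax.grad(loss)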
That's exactly as self-assessing and self-modifying as a neural network implemented using any other kind of computation substrate.
I skimmed the paper linked there. They did use a digital model of the glass-impurity substrate to adjust the locations of the impurities. This doesn’t sound much different from training a network’s weights via backpropagation, except that here one can literally see those weights. I don’t see why it wouldn’t fit the usual definition of a neural network.
It may have been designed using AI, but it's not AI.
The impurities are not the weights, they are the output of the design software. It is the design software that has been learning something, not the pane of glass.
It is like using AI to design, say, the most aerodynamic plane.
Only here they used an AI to design something that performs a task we traditionally use as a benchmark for AI models. But this piece of glass, just like the plane mentioned above, is not learning anything and it's not an AI.
Thanks for giving this analogy, it made more sense than what I was imagining above.
If I understand correctly, it’s the design process, not the glass, that used learning. Along the same analogy, I guess the sculpture in London (the glass, here), which was designed using a random walk (the neural net), would be the same: the sculpture itself isn’t a “random walk”, but the design process was.
Edit: I read the other comments and it’s getting more confusing! AI, from my school courses, would be the implementation of algorithms like hill climbing, where a system is online: it takes some input and tries its best to find a solution. Now if I take the output itself and use it in, say, signal processing, that output would be a “device” to do something, not an “AI device”. Does this make any sense at all? I’d love to get some pointers on this to read.
It hardly passed the litmus paper test for what we class as AI. Though that said, by their definition, litmus paper is equally AI as it takes an input and produces an output.
Just seems a bit like being told you can buy a flying car, only to find out that it has a short flight time, doesn't fly very high, and depends on approaching a ramp at speed.
> The AI works by taking advantage of small bubbles deliberately incorporated into the glass, as well as embedded impurities such as graphene. As the light waves reflected off the handwritten numbers pass through the glass AI, they are bent in specific ways by the bubbles and impurities. This bending process focuses the light onto one of ten specific spots on the other side of the glass, depending on the number presented. (...)
> Training the AI involved presenting the glass with different images of handwritten numbers. If the light did not pass through the glass to focus on the correct spot, the team adjusted the size and locations of impurities slightly. After thousands of iterations, the glass “learned” to bend the light to the right place. (...)
This sounds pretty much like they're training a neural network.
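The quoted procedure maps directly onto an ordinary training loop. A toy version of what that presumably looks like inside their simulator (everything here, including `simulate`, the shapes, and the learning rate, is an illustrative guess, not the paper's code):

    import jax
    import jax.numpy as jnp

    # Toy differentiable "glass": the impurity layout is the trainable
    # parameter vector; each image's light ends up spread over 10 spots.
    def simulate(impurities, image):
        return (image * impurities).reshape(10, -1).sum(axis=1)

    def loss(impurities, image, label):
        probs = jax.nn.softmax(simulate(impurities, image))
        return -jnp.log(probs[label])   # penalise light landing on the wrong spot

    grad_fn = jax.jit(jax.grad(loss))

    key = jax.random.PRNGKey(0)
    impurities = 0.01 * jax.random.normal(key, (780,))  # initial impurity layout
    images = jax.random.uniform(key, (100, 780))        # stand-in "handwritten digits"
    labels = jnp.arange(100) % 10                       # target spot for each digit

    # "Presenting the glass with different images ... adjusted the size and
    # locations of impurities slightly ... after thousands of iterations."
    for step in range(2000):
        i = step % 100
        impurities = impurities - 0.1 * grad_fn(impurities, images[i], labels[i])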
I agree, but then again, how is this different from using pretty much the same mechanics, just in software form? Most “AI” used in consumer apps relies on pre-made models that are then applied to input data. This seems very similar.
It's not different, that's the point. We already know that these "mechanics" are very powerful, and now this paper shows that we can get results at nearly the speed of light.
There are two different kinds of neural networks. Large models are fed lots of data to train them. But when the training is done, the results can be closely approximated by a much smaller network. The smaller networks can be distributed more easily and even used on phones. But they are fixed, not used for training anymore.
It's definitely AI - it's still recognizing a handwritten digit. But not all AI applications require further training. Sometimes you just need the end result.
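In software terms, what ships is just a frozen forward pass over exported numbers; the glass is the same thing with the numbers baked into the substrate. A sketch (the names and shapes are hypothetical; the values would come from whatever offline training produced them):

    import jax
    import jax.numpy as jnp

    # A frozen classifier: the learned state is just fixed numbers.
    # Nothing here updates after deployment.
    def classify(params, image):
        w1, b1, w2, b2 = params
        hidden = jax.nn.relu(image @ w1 + b1)
        return jnp.argmax(hidden @ w2 + b2)   # which of the 10 "spots" lights up

    # Stand-ins for weights exported from an offline training run.
    params = (jnp.zeros((784, 64)), jnp.zeros(64),
              jnp.zeros((64, 10)), jnp.zeros(10))

    digit = classify(params, jnp.zeros(784))  # inference only, no further learning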
The issue is what is defined as "AI". From [1], some of those definitions include
- Thinking Humanly - the ability to problem-solve, introspect, and learn (Cognitive Science)
- Thinking Rationally - the ability to perceive, reason, and act (Logic)
- Acting Humanly - simply doing things that (at the moment) humans do better (the Turing Test)
- Acting Rationally - the design of intelligent agents (the focus of the textbook, Betty's Brain[2])
Yes, they all vaguely sound the same, but the point is: if you took the glass and had to pick which definition it satisfies, which would it be? My point is, what about this glass makes it "intelligent"? The end of the article starts to talk about how a combination of these AI glasses could form some sort of efficient image-recognition process. Since we still don't have a clear definition of what intelligence is, is it simply a combination of tiny little perceptrons (as in neural networks), each with a specific differentiation task?
What people call “AI” is actually a long historical process of crystallizing collective behavior, personal data, and individual labor into privatized algorithms that are used for the automation of complex tasks: from driving to translation, from object recognition to music composition. Just as much as the machines of the industrial age grew out of experimentation, know-how, and the labor of skilled workers, engineers, and craftsmen, the statistical models of AI grow out of the data produced by collective intelligence.
I believe the definitions of “intelligent” and “adaptive” are intertwined. I’m not sure what the distinction is between those two words, but I know they are connected.
Because this system can’t adapt, I agree it is probably not meaningfully “intelligent”.
Agreed, I would say it's more like a logic tree, or rather an optical circuit. Even academics are choosing hot names for papers; basically, in CS today, if your paper does not have AI in it, good luck getting attention.