
Blimey, this is really stretching the definition of AI, surely? It’s a piece of glass that has been designed by a lot of trial and error to perform one specific task. It sounds like humans were analysing the fitness and making the modifications. I wouldn’t call it “unpowered AI” any more than I’d call a coin sorter that.



Because it uses glass as a substrate and light as an information carrier?

Most of what we call “AI” today also uses pre-learned weights for its neural networks, and in many use cases those weights are not touched after deployment.

I don’t see why a neural network encoded in glass should not be an AI while the same neural network on a computer is one — either you have to call both AI or neither.


Most of what we call AI can learn and update to an extent, in order to match a wider variety of inputs or to improve accuracy on existing inputs; this is a hardcoded solution. If this counts as AI, then every bit of software ever written also counts as AI, which makes the term even more meaningless and marketing-buzzwordy than it already is.

I suppose the question is, if an AI has learned and you export that final learned state to use in a now-hardcoded classifier, is that classifier still AI (or part of the overall AI) or is it simply the output? I can imagine arguments on both sides. If you accept that as AI, then sure, this fits the bill!


> Most of what we call AI can learn and update to an extent

Most of what we call AI are hardcoded solutions once in production. There may be ongoing offline improvements being made, but once the improvements are established the production AI is replaced with a point-in-time snapshot of the AI undergoing offline training. Self-learning in production causes all kinds of problems, but most significantly it's a security issue since it gives an attacker the ability to manipulate behavior by curating examples.


The AI generates the solutions. The solutions are not AI.


AI is gradient descent?


It's comical how little people understand about machine learning: no one calls an ODE solver artificial intelligence, but gradient descent on an interesting equation is somehow now A.I.


Machine learning is just the layperson's statistics, or something to that effect.


Why does this system have to be hardcoded? Certainly you could automate the glass fabrication technique with a computer and robotics – I'm just imagining that was beyond the scope of this study.


An equivalent in neural nets would be a model that is trained and then, once the results are satisfactory, burned into an FPGA. That FPGA is not AI, and neither is this glass. Or both are.
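To make the analogy concrete, here's a toy sketch of the "freeze after training" step in Python. The weights are random placeholders and the int8 scheme is generic, not any particular FPGA toolchain:

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=(784, 10)).astype(np.float32)  # pretend: trained weights

    # "Burning in": quantize once to int8, then never touch again.
    scale = np.abs(w).max() / 127.0
    w_frozen = np.round(w / scale).astype(np.int8)

    def classify(x):  # x: flattened 28x28 image as uint8
        # Inference uses only the frozen integer weights; no learning happens here.
        logits = x.astype(np.int32) @ w_frozen.astype(np.int32)
        return int(np.argmax(logits))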


At least the neural networks were at some point self-assessing and self-modifying, and plausibly could be said to “learn” something. Here it seems more plausible to say that the humans learned what structure to produce than that the glass did.

But you’re right, I think many “AIs” shouldn’t really be named that either!


They didn't manually adjust the glass until it worked (which would be infeasible); they wrote a differentiable simulator and used it to determine the material to use at each point via gradient descent, which is quite a feat.

That's exactly as self-assessing and self-modifying as a neural network implemented using any other kind of computation substrate.
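For anyone curious, a heavily simplified sketch of that loop, with a toy softmax "simulator" standing in for the paper's actual wave-propagation model (all names and shapes here are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    theta = rng.normal(scale=0.01, size=(784, 10))  # "impurity" parameters

    def simulate(theta, x):
        # Stand-in for the differentiable optics simulator: maps an input
        # image to normalized intensity at 10 detector spots.
        z = x @ theta
        e = np.exp(z - z.max())
        return e / e.sum()

    def train_step(theta, x, label, lr=0.1):
        p = simulate(theta, x)
        # Hand-derived gradient of the cross-entropy loss w.r.t. theta; with
        # a real differentiable simulator you'd get this from autodiff.
        grad = np.outer(x, p - np.eye(10)[label])
        return theta - lr * grad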


I skimmed the paper linked there. They did use a digital model of the glass-impurity substrate to adjust the locations of the impurities. This doesn’t sound much different from training a network’s weights with backpropagation — except here one can literally see those weights. I don’t see why it wouldn’t fit the usual definition of a neural network.


It may have been designed using AI, but it's not AI. The impurities are not the weights, they are the output of the design software. It is the design software that has been learning something, not the pane of glass.

It is like using AI to design, say, the most aerodynamic plane. Only here they used an AI to design something that performs a task that we traditionally use as a benchmark for AI models. But this piece of glass, just like the plane mentioned above, is not learning anything and it's not an AI.


Thanks for giving this analogy; it made more sense than what I was imagining above.

If I understand correctly, it’s the design process, not the glass, that used learning. By the same analogy, I guess the sculpture in London (the glass, here), which was designed using a random walk (the neural net, here), would be the same: the sculpture in itself isn’t a “random walk”, but the design process was.

(I couldn’t recall what was the name of the sculpture. Here’s the wiki link: https://en.m.wikipedia.org/wiki/Quantum_Cloud )

Edit: I read the other comments and it’s getting more confusing! AI, from my school courses, would be an implementation of algorithms like hill climbing, where a system is online: it takes some input and tries its best to find a solution. Now if I take the output itself for use in, say, signal processing — that “output” would be a “device” to do something and wouldn’t be an “AI device”. Does this make any sense at all? I’d love to get some pointers on this to read.


Out of curiosity - did you ever work with neural networks? (as in - the algorithms, not the high level abstractions?)


Yes, pretty much every day.


> either you have to call both AI or neither

FWIW having worked with some serious ML heads at Google and Facebook in the past, none of them referred to ML as AI.


> It’s a piece of glass that has been designed by a lot of trial and error to perform one specific task

And you're a piece of matter that has been designed by a lot of trial and error to perform one specific task: to propagate your species.


Except we do it intelligently and the glass doesn't.

The parent is not saying the glass is useless, just that it isn't intelligent.


"glass isn't intelligent"

and silicon is?


Hint: the material is not relevant; what it does is relevant.


It hardly passes the litmus-paper test for what we class as AI. That said, by their definition litmus paper is equally AI, as it takes an input and produces an output.

Just seems a bit like being told you can buy a flying car, only to find out that it has a short flight time, doesn't fly very high, and depends upon approaching a ramp at speed.


How is it different from an FPGA that has some classifier encoded in it? It's just all "fiber optic" instead of electricity.


This is faster and it doesn't use electricity.


I should mention, people have been using lenses to approximate FFTs for 150 years.
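For the unfamiliar: the field at a lens's back focal plane is, up to scaling and a phase factor, the 2-D Fourier transform of the field at its front focal plane. A quick numpy illustration of the computation a lens does for free (the square aperture is made up for the example):

    import numpy as np

    # A square aperture illuminated by a plane wave.
    field = np.zeros((256, 256))
    field[96:160, 96:160] = 1.0

    # Up to constants, a lens places the 2-D Fourier transform of this
    # field at its focal plane.
    focal = np.fft.fftshift(np.fft.fft2(field))
    intensity = np.abs(focal) ** 2  # the familiar sinc^2 diffraction pattern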


> The AI works by taking advantage of small bubbles deliberately incorporated into the glass, as well as embedded impurities such as graphene. As the light waves reflected off the handwritten numbers pass through the glass AI, they are bent in specific ways by the bubbles and impurities. This bending process focuses the light onto one of ten specific spots on the other side of the glass, depending on the number presented. (...)

> Training the AI involved presenting the glass with different images of handwritten numbers. If the light did not pass through the glass to focus on the correct spot, the team adjusted the size and locations of impurities slightly. After thousands of iterations, the glass “learned” to bend the light to the right place. (...)

This sounds pretty much like they're training a neural network.
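The quoted description reads more like iterative perturbation than analytic gradients. As a caricature (a generic hill-climbing loop, not the authors' actual method; the toy loss is invented):

    import numpy as np

    rng = np.random.default_rng(0)

    def train(loss, params, iters=10000, step=0.01):
        # Nudge the impurity parameters; keep a change only if it focuses
        # more light on the correct spots (i.e. lowers the loss).
        best = loss(params)
        for _ in range(iters):
            trial = params + step * rng.normal(size=params.shape)
            if loss(trial) < best:
                params, best = trial, loss(trial)
        return params

    # Toy usage: shrink a quadratic "miss distance".
    target = rng.normal(size=8)
    tuned = train(lambda p: np.sum((p - target) ** 2), np.zeros(8))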


I agree, but then again how is this different from using pretty much the same mechanics in software form? Most “AI” used in consumer apps uses pre-made models that are then applied to input data. This seems very similar.


It's not different, that's the point. We already know that these "mechanics" are very powerful, and now this paper shows that we can get results at nearly the speed of light.


Speed of light, and using an analog form of computation.

This single piece of glass can do as many visual recognitions as physics allows, using no energy (light aside).


There are two different kinds of neural networks. Large models are fed lots of data to train them. But when the training is done, the results can be closely approximated by a much smaller network. The smaller networks can be distributed more easily and even used on phones. But they are fixed, not used for training anymore.

It's definitely AI - it's still recognizing a handwritten digit. But not all AI applications require further training. Sometimes you just need the end result.
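That "train big, ship a small frozen copy" step is usually called distillation. A bare-bones sketch with toy linear nets and no temperature term (real setups follow Hinton et al.'s soft-target recipe; everything here is illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    W_teacher = rng.normal(size=(784, 10))  # big net, frozen after training
    W_student = np.zeros((784, 10))         # smaller net (here just linear)

    def distill_step(W_student, x, lr=0.05):
        # The student learns to match the teacher's soft outputs,
        # not the hard labels.
        target = softmax(x @ W_teacher)
        pred = softmax(x @ W_student)
        # Gradient of cross-entropy against a soft target, for linear logits.
        return W_student - lr * np.outer(x, pred - target)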


The issue is what is defined as "AI". From [1], some of those definitions include

- Thinking Humanly - the ability to problem solve, introspection, and learning (Cognitive Science)

- Thinking Rationally - the ability to perceive, reason, and act (Logic)

- Acting Humanly - simply do things that (at the moment) humans do better (the Turing test)

- Acting Rationally - the design of intelligent agents (the focus of the textbook, Betty's Brain[2])

Yes, they all vaguely sound the same, but the point is: if you took the glass and had to select which definition it marks off, which would it be? My point is, what about this glass makes it "intelligent"? The end of the article starts to talk about how a combination of AI glasses could form some sort of efficient image-recognition process. Since we still don't have a clear definition of what intelligence is, is it simply a combination of tiny little perceptrons (from neural networks) that have specific differentiation tasks?

[1] http://aima.cs.berkeley.edu/

[2] https://wp0.vanderbilt.edu/oele/bettys-brain/


What people call “AI” is actually a long historical process of crystallizing collective behavior, personal data, and individual labor into privatized algorithms that are used for the automation of complex tasks: from driving to translation, from object recognition to music composition. Just as much as the machines of the industrial age grew out of experimentation, know-how, and the labor of skilled workers, engineers, and craftsmen, the statistical models of AI grow out of the data produced by collective intelligence.

https://www.e-flux.com/journal/101/273221/three-thousand-yea...

https://news.ycombinator.com/item?id=20249150


Coin sorter analogy seems apt


Haha. You just basically described machine learning to a tee


This was my response to the article too. 'This isn't AI... uhhh... oh but it is machine learning... ugh'


If you take "AI" to be a marketing buzzword for machine learning, it is AI in the sense that it is a passive inference technology.


I believe the definitions of “intelligent” and “adaptive” are intertwined. I’m not sure what the distinction is between those two words, but I know they are connected.

Because this system can’t adapt, I agree it is probably not meaningfully “intelligent”.


Agree. I would say it's more like a logic tree, or an optical circuit. Even academics are choosing hot names for papers; basically, in CS today, if your paper does not have AI in it, good luck getting attention.

Source: me, a Stanford CS grad.


This is a feed-forward neural network implemented in glass.
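Roughly: each slab acts like a fixed complex-valued linear layer, and the detector does the readout. In toy numpy form (dense random matrices standing in for diffraction between impurity planes; the detector regions are invented):

    import numpy as np

    rng = np.random.default_rng(0)

    # Each "layer" = one stretch of glass: a fixed complex linear map on the
    # optical field (real propagation is diffraction, not a dense matmul).
    layers = [(rng.normal(size=(784, 784)) + 1j * rng.normal(size=(784, 784))) / 28.0
              for _ in range(3)]

    # Crude "detector": sum intensity over 10 disjoint pixel regions.
    mask = np.zeros((784, 10))
    for k in range(10):
        mask[78 * k:78 * (k + 1), k] = 1.0

    def glass_forward(image):  # image: 28x28 array
        field = image.flatten().astype(complex)
        for L in layers:
            field = L @ field  # fixed weights: nothing updates at "inference"
        intensity = np.abs(field) ** 2  # detectors see only intensity
        return int(np.argmax(intensity @ mask))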



