
It's interesting to note why this was considered AI in 1952 and why some may not consider it AI today. The AI was the search algorithm to find an efficient solution to the maze, not the mouse being able to navigate it later in a second run. The second run was just a demonstration that it had found the solution; the actual intelligence was its first run through the maze. Almost any configuration of the maze could be solved using algorithms like depth-first, breadth-first, or A* search (I didn't check which one the video demonstrates). Even though the algorithm was trivial, its applicability to problems of today is still extraordinary. Neural networks are equally trivial algorithms capable of remarkable things. I'd argue this is as much AI today as it was back then; just more people know how Shannon performed this magic trick.
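For the curious, here's a minimal sketch of the kind of search involved, using breadth-first search over a grid maze. This is just an illustration of the general technique in Python, not Shannon's relay logic; the maze encoding and names are made up.

    from collections import deque

    def solve_maze(walls, rows, cols, start, goal):
        """Breadth-first search over a grid maze.
        walls: set of blocked (row, col) cells; start/goal: (row, col) tuples.
        Returns the shortest path as a list of cells, or None."""
        frontier = deque([start])
        came_from = {start: None}
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                # Walk the parent pointers back to the start.
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                nr, nc = nxt
                if (0 <= nr < rows and 0 <= nc < cols
                        and nxt not in walls and nxt not in came_from):
                    came_from[nxt] = cell
                    frontier.append(nxt)
        return None

The first "intelligent" run is the search above; the second run is just replaying the stored path.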



>> The AI was the search algorithm to find an efficient solution to the maze, not the mouse being able to navigate it later in a second run.

But that's not the whole story! The program can update its solution to the maze when the maze changes, and it only changes the part of the solution that has actually changed. When Shannon changes the maze and places Theseus in the modified part, I kind of rolled my eyes, sure that it was going to start a new search all over again, but I was wrong: it searches until it finds where the unmodified part of the maze begins, then it continues on the path it learned before.

It seems that, in solving the maze, the program is building some kind of model of its world, that it can then manipulate with economy. For comparison, neural nets cannot update their models - when the world changes, a neural net can only train its model all over again, from scratch, just like I thought Theseus would start a whole new search when Shannon changed the maze. And neural nets can certainly not update parts of their models!

This demonstration looks primitive because everything is so old (a computer made with telephone relays!), but it's actually attacking problems that continue to tie AI systems of today into knots. It is certainly AI. And, in the early 1950s, it's AI avant la lettre.


Great observation. The solution to the update problem is relatively simple: it doesn't do a search again on update. Instead, every time it encounters a change in what it knows, it just changes the data stored in memory. All it is doing is updating its learned representation. After this it still knows where the other obstacles are without having to do DFS or BFS again. If the solution were a graph, it would just delete an edge; it still knows what all the other edges are. If it encounters another change, it updates the state of the graph again.
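A rough sketch of that idea in Python (the data structure and names are mine, just to illustrate, not anything from Shannon's design): the learned maze is an adjacency map, and a newly discovered wall only deletes one edge; everything else the mouse learned is kept.

    def remove_passage(adjacency, a, b):
        """Forget the passage between cells a and b in the learned maze."""
        adjacency[a].discard(b)
        adjacency[b].discard(a)

    # Learned 1x3 corridor: (0,0) - (0,1) - (0,2)
    maze = {
        (0, 0): {(0, 1)},
        (0, 1): {(0, 0), (0, 2)},
        (0, 2): {(0, 1)},
    }

    # A wall appears between (0,1) and (0,2); only that edge is forgotten.
    remove_passage(maze, (0, 1), (0, 2))

    # The rest of the learned structure is untouched, so any re-search
    # only has to explore around the changed region.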

With regard to neural networks: if they are given a reward function that can be dynamically evaluated (in this case, did I reach the end or not?), they are pretty good at learning without explicit supervision.


You make it sound simple, but from my point of view the ability to update one's learned representation requires a representation that can withstand being updated. I mentioned John McCarthy's concept of "elaboration tolerance" in another comment, i.e. the ability of a representation to be modified easily. This was not a solved problem in McCarthy's time and it's not a solved problem today either (see my sibling comment about "catastrophic forgetting" in neural nets). For Shannon's time it was definitely not a solved problem, perhaps not even a recognised problem. That's the 1950s we're talking about, yes? :)

Sorry, I didn't get what you mean about the dynamically evaluated reward function.


>For comparison, neural nets cannot update their models - when the world changes, a neural net can only train its model all over again, from scratch

I mean, sure they can. Training a neural network is literally nothing but the network's model being updated one batch of training examples at a time. You can stop, restart, extend, or change the data at any point in the process. There are whole fields of transfer learning and online learning which extend that to updating a trained model with new data.

edit: Also, in a way, reinforcement learning, where the model controls the future data it sees and updates itself on.
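To make the point concrete, here's a toy sketch of incremental updates with PyTorch (the model and data are made up; the only point is that each batch nudges the existing weights rather than retraining from scratch):

    import torch
    from torch import nn

    model = nn.Linear(4, 1)                       # toy regression model
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    def update_on_batch(x, y):
        """One incremental update: the model afterwards is the old model
        plus a small gradient step, not a fresh retrain."""
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        return loss.item()

    # "Old world": fit some data...
    update_on_batch(torch.randn(32, 4), torch.randn(32, 1))
    # ...then keep updating the very same weights when new data arrives.
    update_on_batch(torch.randn(32, 4), torch.randn(32, 1))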


The problem I'm describing is formally known as "catastrophic forgetting". Quoting from wikipedia:

Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to completely and abruptly forget previously learned information upon learning new information.

https://en.wikipedia.org/wiki/Catastrophic_interference

Of course neural nets can update their weights as they are trained, but the problem is that weight updates are destructive: the new weights replace the old weights and the old state of the network cannot be recalled.
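A toy illustration of why that's destructive (an entirely made-up example, not any particular architecture): fit a single weight to "task A", then to "task B", and nothing in the stored parameters remembers task A afterwards.

    w = 0.0   # the network's only "weight"
    lr = 0.1

    def train(task_slope, steps=200):
        """Gradient descent on squared error for y = task_slope * x, with x = 1."""
        global w
        for _ in range(steps):
            grad = 2 * (w - task_slope)   # d/dw of (w*1 - task_slope*1)**2
            w -= lr * grad

    train(2.0)           # task A: w converges to ~2.0
    w_after_a = w
    train(-2.0)          # task B: w converges to ~-2.0, overwriting task A
    print(w_after_a, w)  # ~2.0 then ~-2.0; the old solution is gone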

Transfer learning, online learning and (deep) reinforcement learning are as susceptible to this problem as any neural network techniques.

This is a widely recognised limitation of neural network systems, old and new, and overcoming it is an active area of research. Many approaches have been proposed over the years but it remains an open problem.


What is transfer learning if not partially updating the model...?


I always say that AI is a forever-moving goalpost. It is simply a task a human can do that you wouldn't expect a machine to be able to do. So as soon as a machine can do it, people no longer consider it intelligent (i.e. it is just A*, it is just a chess engine, it is just a network picking up on patches of texture, ..., it isn't really "intelligent").


This is because we originally thought "only a human would be able to play chess", "only a human would be able to drive a car". The thinking there is that if we were to solve these problems, we'd have to get closer to a true artificial intelligence (the kind that today we'd call "AGI" because "AI" doesn't mean anything anymore).

This line of thinking has been shown to be pretty faulty. We've come up with engines and algorithms that can play Go and Chess, but we aren't any closer to anything that resembles a general intelligence.


Well, GPT-3 is definitely not a general intelligence, but I would say it's much closer than Deep Blue. Progress is happening! It's just a question of how far and fast we run with the goalposts.


Shannon did not use the word intelligence to describe the mouse in this demonstration - instead, he talked about learning. That's why the second run was considered more important than whatever algorithm was used to solve the maze.

To that end, I'm curious about their cache invalidation solution. Are there timestamps, or is it a flag system?


You are being far, far, far too generous with the complexity of this design if you think there's some kind of cache invalidation. It's a purely electromechanical (relay) computer, which means it is going to be very simple in abstract design, because doing anything even mildly complex would require an insane amount of space.

I can't find design documents for this, but I can make a pretty educated guess about its design.

Each square has two relays, representing the number of left turns necessary to exit the square. Each time a whisker touches a wall, a signal is sent to a mechanical adder which adds 1 to the relays in that square. When the mouse enters a square, a "register" is set with a value based on whether it entered from the left, top, right, or bottom; then the mouse is turned and the register decremented until it hits 0, and the mouse attempts to walk in the indicated direction.

The maze ends up looking something like this:

    +-----+
    |0|1 1|
    +-- - +
    |1 3|0|
    + --- +
    |1 3|x|
    +-- --+
Where the mouse starts on x and turns the indicated number of times in each square. You can actually put the mouse down anywhere and it will exit the maze, if the walls are left unchanged.


If my memory serves me right, you are right. I think I've read that it was implemented with two relays per cell. These encode the last cardinal direction the mouse exited the cell in.
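If that recollection is right, the memorised run is just a table lookup per cell. A minimal Python sketch of that replay (the maze and encoding below are illustrative, not taken from any design document):

    STEP = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

    def replay(exit_dir, start, goal, max_steps=100):
        """Follow the stored exit direction of each visited cell."""
        cell, path = start, [start]
        for _ in range(max_steps):
            if cell == goal:
                return path
            dr, dc = STEP[exit_dir[cell]]
            cell = (cell[0] + dr, cell[1] + dc)
            path.append(cell)
        return None   # stored memory is stale or incomplete

    # Exit directions remembered from a previous searching run of a 2x2 maze:
    memory = {(1, 1): "W", (1, 0): "N", (0, 0): "E"}
    print(replay(memory, start=(1, 1), goal=(0, 1)))
    # [(1, 1), (1, 0), (0, 0), (0, 1)]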


On the repeat run... does the mouse always turn left, or does it sometimes turn right? I wasn't paying close attention.


> I'm curious about their cache invalidation solution

My guess: there would be a model of the maze somewhere (probably a binary relay map of walls), and as soon as the mouse hits an inconsistency, this map is updated. So there isn't really a cache; it's more like a model, or perhaps you can think of it as collision-based cache (model) invalidation. The mouse probably then follows the solution to this modified maze, modified only insofar as it has measured modifications.

Is there a technical specification somewhere? I'd certainly be curious to read it.


A* search as we know it wasn't developed until the mid-1960s.


The term A.I. was coined four years later, in 1956. But an earlier term, cybernetics, encompassed some aspects of A.I.



