MIT professor Marvin Minsky wins $540,000 award (bostonglobe.com)
130 points by wslh on March 23, 2014 | 109 comments



I still remember a story I heard Gerry Sussman tell about Marvin, from when Gerry was a grad student. Gerry was programming a neural net (or something like it; I forget exactly what) and told Marvin he was planning to use a random number generator to initialize the weights in the network "so it won't have any biases". Marvin replied, in his careful, laconic way, "Of course it will have biases. You just won't know what they are."
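
To make Marvin's point concrete, here's a rough sketch I just made up (not Gerry's actual program, and the one-neuron "network" is purely illustrative): two "unbiased" random initializations can already disagree about the same input before any training happens, so each random starting point carries its own hidden preferences.

  import random

  def random_net(seed, n_inputs=4):
      # "Unbiased" initialization: weights drawn at random.
      rng = random.Random(seed)
      weights = [rng.uniform(-1.0, 1.0) for _ in range(n_inputs)]
      # A one-neuron "network": weighted sum, thresholded at zero.
      return lambda xs: sum(w * x for w, x in zip(weights, xs)) > 0

  x = [0.2, -0.5, 0.9, 0.1]            # some arbitrary input
  net_a, net_b = random_net(1), random_net(2)
  print(net_a(x), net_b(x))            # untrained nets that may already disagree:
                                       # each random draw is a bias, just an unknown one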

This comment has never seemed as stunningly insightful to me as it evidently did to Gerry, but it's probably one of those things that is obvious once stated, but not before.

Anyway I'm glad to see Marvin winning this prize.


The story is one of the hacker koans: http://en.wikipedia.org/wiki/Hacker_koan


You know, I just now finally understood what was going on there: a randomized map matches no territory. Enlightenment achieved.


But it does match a territory. It's just a random one.


Ok, let me put it this way: if there happens to exist some territory matched by your randomly-generated map, you still have the problem that a randomly-generated map was not in any way caused by the territory. Of course, that's just unpacking the word "match".


Let's suppose you're lost in the Amazon. Would you not prefer a professionally-drawn map of the Amazon over a "map" I whimsically scribbled while blindfolded? After all, my map might not correspond to the Amazon, but it must correspond to something.


I'm not sure if you are responding to the right comment or if you understood what I said. The point is when you initialize weights randomly, you aren't starting with a blank map and then adding stuff to it. You are starting with a random map and then trying to correct it. Random bias is still bias.


My bad. The way I interpreted your comment was

> But (a random map is useful because) it does match a territory (somewhere out in the actual universe), it's just a (location that's hard to find).

I don't know what I was thinking.


Not necessarily. See also: "when the map and the terrain differ, trust the terrain."


I've seen some suggestions that the human brain is full of random pattern generators, and that some kinds of learning consist of selecting which pre-existing patterns most reliably match the external stimuli.


From Wikipedia: "Isaac Asimov described Minsky as one of only two people he would admit were more intelligent than he was, the other being Carl Sagan."


“I don’t know what they think I do,” Minsky said. “I make up theories of how the mind works and when I’m lucky enough, I have some students who make careers out of that.”

As a guy with a lab, I find the idea of thinking about how the mind works from time to time, and then having students make a career of that idea, fairly humbling.


It's not exactly the whole story. Even in the '60s and '70s the AI labs faced big battles to retain their funding, and IIRC Minsky has a reputation as a pretty rough rider in those struggles.


My favourite Minsky story is the one where he commissioned a grad student for a summer to solve "the computer vision problem once and for all".

It's a great anecdote to illustrate how so many have underestimated the difficulty of problems in computer vision, robotics, and AI.


I guess this was before Moravec's paradox was "discovered" http://en.wikipedia.org/wiki/Moravec%27s_paradox


Minsky was one of the ones who formulated Moravec's paradox, and the difficulty of computer vision probably played a big role.


It was undergrads. And there were six of them, to be fair.


Random Marvin Minsky story: About 15 years ago I invited Guy Kawasaki to speak to a student group at MIT and he accepted. I was able to schedule in about an hour to tour him through the MIT Media Lab, in the evening. He and I were wandering from area to area when we came upon Minsky, working in a lab. Being a former CS undergrad from a different school, I was pretty excited, so I decided to go for it and introduce myself and Guy to this famous person. Guy said, "Wow - you're the Marvin Minsky?" Without batting an eye, Minsky said, "Wow - you're the Guy Kawasaki?" And then proceeded to spend ten minutes walking us through what he was working on.


I am sure there are a lot of (under)graduate research programs that could use that money better than some eminent professor at the end of his days. Not that the recognition and praise are undeserved, just that the money could have been better utilized imho.


Completely agree. If I were trying to inspire research through monetary rewards, I wouldn't be giving them out to someone in Minsky's position, regardless of how deserving. I'd look instead to recognize someone up-and-coming who could use that to further their career and inspire others to follow in their footsteps. Minsky is already a legend.


This article is a few months old: January 15th, 2014.


I see so many references to Marvin Minsky in the texts and books I read. Glad to see him recognized like this!


Idiocy:

My thermostat has three possible beliefs: 1) It is too warm in here. 2) It is too cold in here. 3) It is just about right.

Minsky is wrong about AI in the same way that McCarthy was wrong. McCarthy really "believed" that his thermostat had three possible beliefs, and that his thermostat really "believed" one of them at any given time. http://Books.Google.com/books?id=yNJN-_jznw4C&pg=PA30&lpg=PA...

Searle debunked him, but the message doesn't seem to have gotten out. We are still wasting money on AI.

We are wasting money on AI because we are following the materialist hypothesis: that there is nothing in the universe besides matter and energy, and the interactions between matter and energy. To reject this hypothesis is unthinkable for many, even for most, because the only alternative hypothesis would be that some sort of non-material (spiritual?) stuff must exist.

But the evidence is overwhelming. The evidence cannot be denied.

Linguistics, for example, has always been divided into syntax and semantics. No linguist has ever challenged this taxonomy. Both syntax and semantics are very real.

Computers are syntactic engines. They do syntax. They can only do syntax. It matters if a symbol is present, or not, and it matters in what order symbols are arranged. But the computer does not, and indeed cannot, associate any meaning (semantics) with any symbol. The only way a computer, being only a syntactical engine, can appear to do semantics, is if a human has first been clever enough to have found a mapping in some natural language between syntax and semantics [and such mapping must exist in the first place, for him to find it], and then clever enough to exploit it. The computer is still doing only syntax, even while appearing to do semantics.

Searle showed this also with his Chinese Room analogy. But the "cognitive scientists" have not been paying attention. Or they are still in denial.

But humans really do semantics. Nobody questions this, or challenges it, because it is self-evident. You are doing semantics right now, as you read my comment.

Because humans really do semantics, and computers cannot, humans and computers must be fundamentally different sorts of creatures. The idea that the human mind is software, running on the hardware of a human brain, must necessarily be false. (If it was true, then humans couldn't do semantics either, but they do!)

If this wasn't enough, Nagel (of "What is it like to be a bat?" fame) has shown that the materialistic hypothesis is almost certainly wrong, in his recent book "Mind and Cosmos". http://www.Amazon.com/Mind-Cosmos-Materialist-Neo-Darwinian-...

But the world pays no attention to Nagel either. To do so would be to have a Kuhnian revolution of epic proportions, and that is not "scientifically correct".

So the cognitive scientists, the AI researchers, the biologists, and pretty much everybody in science today, toe the politically correct line. They celebrate Minsky.

They ought to be bringing up the hard questions. That is what real scientists do.

It is easier to be an idiot, because that doesn't put your funding in jeopardy.

I conclude that there are very few "real" scientists. Cue the "no true Scotsman" jokes. But deal with the issue I've raised. Be intellectually honest.


Are you seriously bringing up Searle's Chinese Room as an argument against AI research? According to Wikipedia, "The Chinese room argument is primarily an argument in the philosophy of mind, and both major computer scientists and artificial intelligence researchers consider it irrelevant to their fields." Right on.

First off, philosophy of mind is 100% irrelevant to modern AI research, which is more concerned with creating algorithms that act as if at a human level of intelligence than creating algorithms that recreate human states of mind. You guys might not get that, but every person working on AI does.

Even given that, it's allowing for a very charitable interpretation of Searle's "work": most of us consider Searle to be a fucking idiot at best, a troll in the most likely case. The Chinese Room analogy is tortured, and pretty much assumes dualism from the start - to me, if a dude in a closet pushing papers around could fake understanding Chinese as far as an outside observer is concerned, we'd have solved strong AI, so I don't care whether Searle thinks we've succeeded or not.

Re: Nagel, I don't know his stuff, but having read http://en.wikipedia.org/wiki/What_Is_it_Like_to_Be_a_Bat%3F, I'm not too interested, there's so much vagueness there that I feel like this is just more bullshit questioning whether we have achieved "real" understanding or just a mechanical approximation. And again, I don't care. I want a program that acts as if it's intelligent, and it needs to pass most normal people's bar for intelligence, not some dipshit philosopher's bar for being human.

Philosophers have always been misinterpreting AI research's goals, which is why nobody in AI has ever paid them any attention, and which is also why they'll never be relevant to anything. Even if they're right, they're not asking questions that anyone cares about.


I like this deconstruction of Searle's Chinese Room, by Scott Aaronson. He calls Searle's argument a "non-insight": http://www.scottaaronson.com/democritus/lec4.html


That link was the most interesting thing I've read on HN all day. Thank you!

My favorite quote: "As a historical remark, it's interesting that the possibility of thinking machines isn't something that occurred to people gradually, after they'd already been using computers for decades. Instead it occurred to them immediately, the minute they started talking about computers themselves. People like Leibniz and Babbage and Lovelace and Turing and von Neumann understood from the beginning that a computer wouldn't just be another steam engine or toaster -- that, because of the property of universality (whether or not they called it that), it's difficult even to talk about computers without also talking about ourselves."


Sure thing. I should have mentioned that you don't need to read the whole thing (Searle's Chinese room argument is discussed in only one section of the linked lecture).

You'll probably enjoy his other material. Scott Aaronson is a prolific expositor; he's been blogging for nearly a decade (since before he was hired by MIT's CS dept).


Searle's primary contribution to philosophy is that he forces every CS student to ponder this important question: "Is this famous philosopher correct, that AI is impossible, or does a nobody like me actually understand the concept of emergence better than he does?"


> Philosophers have always been misinterpreting AI research's goals, which is why nobody in AI has ever paid them any attention, and which is also why they'll never be relevant to anything. Even if they're right, they're not asking questions that anyone cares about.

That's totally unfair. Serious philosophers object to the misapplication of AI research to answer philosophical questions about the mind (not necessarily even by AI researchers), not AI in general. It's basically the same complaint you have against Searle being invoked against AI. Don't sink to the level of the person you're replying to with mindless tribalism. Your definition of 'anyone' appears to be AI researchers.

Fwiw, although Searle made it onto the undergraduate phil mind courses, he isn't really taken that seriously by contemporary philosophers.


That's fair - I'm biased by having too many conversations with self professed philosophers telling me that any push towards AGI is wasted effort because of X, where X just means that it wouldn't satisfy whatever they think is special about humans.

I don't think AI researchers have anything to offer philosophy. The thing is, AI researchers rarely engage at all except when philosophers pop up and tell them that what they're doing is impossible. AI researchers generally don't give a shit about philosophy, whereas there is a ton of noise coming from the other direction.

You may be right in your implicit suggestion that the people bringing up Searle are really just amateurs, though. I don't ever recall anyone with bona fide credentials in philosophy ever mentioning the guy as anything more than a sad amusement...


Philosophy often borrows examples from other fields in order to provide concrete examples of quite abstract ideas.

Unfortunately, this often gets misunderstood (both by practitioners of those fields and people with an axe to grind) as being critical of the field. The criticism is usually really directed at another philosophical position.

I think the reason the Chinese Room argument gets so much attention is that it's an argument against a position that was popular in the 1970s --- that mental states are identical (as in, strict identity) to classical computational states --- while being easy to understand and criticise. As you say, it assumes its own conclusion.

To be fair to Searle, I should point out that while the Chinese Room argument isn't taken seriously, he did other unrelated work that is still relevant!


I care about this question. I think that creating 'algorithms that act as if at a human level of intelligence' rather than 'algorithms that recreate human states of mind' is part of the problem, and acts as a limitation on our AI. I am actively interested in AI that has a sense of self and is capable of developing preferences rather than mere opinions.

That doesn't mean I buy Searle's argument, and indeed I have refuted it below. But I do think that dismissing the questions asked by philosophy out of hand is a mistake, and is causing us to overlook opportunities. The most likely place for general AI to develop is on mobile devices - not in a Her-style human fashion, though.


> The computer is still doing only syntax, even while appearing to do semantics.

Why do you assume people are not doing the same?

> But humans really do semantics. Nobody questions this, or challenges it, because it is self-evident.

It is no more self evident than your claim that computers don't do semantics.

Your argumentation is circular:

IF computers can't do semantics, and IF humans do, then clearly humans and computers must be fundamentally different. But your claim that computers can't do semantics (as opposed to a claim that we don't yet know how to program computers to do semantics) assumes that humans and computers are fundamentally different - if we are not, and the materialist hypothesis holds, there is simply no basis for assuming that we are fundamentally different (as categories; clearly current computers are not structured as human brains).

> It is easier to be an idiot, because that doesn't put your funding in jeopardy.

Less name calling, and more of an argument that isn't resting on a logical flaw might be preferable.


You apparently don't understand how computers work.

There is no "meaning" associated with any variable or its value, and there cannot be.


> You apparently don't understand how computers work.

You appear to like jumping to conclusions.

> There is no "meaning" associated with any variable or its value, and there cannot be.

Please give me a definition of "meaning" that can be applied to a human brain and not to a computer. And what is the evidence that it cannot be applied to a computer?

You are also jumping to the conclusion that if, as you claim, the brain depends on something that can't be explained under the materialistic hypothesis, then a computer cannot depend on that same something; yet you have not even presented an argument for why that would be so.


> Please give me a definition of "meaning" that can be applied to a human brain and not to a computer.

You associate a meaning to the symbol "dog". You think of an animal that barks, wags its tail, chases cats and squirrels, and is happy to see you when you get home. You associate something in the real world with that symbol. (That is the very meaning of doing semantics.)

The computer does nothing of the sort with the symbol "dog".


Please provide a definition of "associate a meaning to the symbol" that can be applied to a human brain and not a computer.

The plain reading of what you've written above is so trivially simple to implement in a computer that it is covered in every introductory algorithm course, so I presume you have either given a definition of "meaning" that is overly simplistic, or have a definition of "associate a meaning" that is substantially more complicated than a plain reading of the words.
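
For what it's worth, the plain reading really is just an associative lookup; here's a toy sketch (mine, and obviously not a claim that it settles the "understanding" question, which is exactly what's in dispute):

  # A toy "association": the symbol "dog" mapped to stored facts about dogs.
  associations = {
      "dog": {
          "is_a": "animal",
          "behaviors": ["barks", "wags tail", "chases cats and squirrels"],
          "when_owner_comes_home": "acts happy to see them",
      }
  }

  def recall(symbol):
      # Return whatever the program "associates" with the symbol, if anything.
      return associations.get(symbol, "no association stored")

  print(recall("dog"))   # the machine retrieves the associations; whether that
                         # amounts to "meaning" is the point being argued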


You just taught us Object Oriented Programming 101.


I remember when I was a kid, imagining how the inventor of English taught everyone else... "'Dog' means-- wait. 'Means' means-- uh-oh."


What is this thing we call "meaning"? Can it be a point in a vast and complex "pattern space" and the meaning of that point is the structure of all the "paths" that map that point to other points in this same pattern space?

Our act of arbitrary categorization of patterns into "syntax" and "semantics" just seems to obscure the fact that they are just two arbitrary encodings of patterns.

How can we know that one pattern is equal to another pattern in a different encoding? Don't we agree on these mappings with other pattern spaces (read: humans)? As we communicate, our pattern spaces start interacting, and all we can hope for is a convergence on the mappings sans encoding.


> Can it be a point in a vast and complex "pattern space" and the meaning of that point is the structure of all the "paths" that map that point to other points in this same pattern space?

That's the exact point of contention, whether semantics can be represented by syntax is unknown currently, though it must hold in a materialistic world. If it can't, as some believe, then it isn't an arbitrary categorization.


Isn't the only issue here that we have one set of spaces (brains) with that structural understanding (link/edge), and another set without that structural understanding?

It seems that the brains that possess the link are busy implementing its isomorphic structure in technology, while the brains that do not possess the link are contributing nothing as they are still in a more "primitive" state? (Primitive meaning that they lack the linkage to see that the terms are really structurally isomorphic on the grand scale of things)

It would be interesting to know what input those brains that do not possess the link require to start possessing it.

Can there exist brains that will never make the link?


Until there's evidence to the contrary, the materialist world is the only world there is; you can simply call it the world, as calling it the materialist world is redundant.

Materialism has ample evidence to support it; dualism has no good evidence, it is therefore dualism that is on trial, not materialism.


This is the part you challenged? We're skipping right past the part where vidarh seemed to assert p-zombiehood? ;)


Searle is ignored by AI researchers because his argument has no relevance--it has been thoroughly debunked.

The question of whether the man in the Chinese room understands Chinese is the wrong question. Someone reading symbols from outside of the room is not interacting with the man, they are interacting with the symbols--the algorithm. The algorithm itself does understand Chinese! This algorithm is made real, it is reified by the actions of the man in the room to create and sustain it. The algorithm exists on an entirely new layer of abstraction from the man. So while it is true that the man does not understand Chinese, the algorithm itself demonstrates every possible requirement for understanding--semantics as you would put it.

A human is analogous to the algorithm and the man shuffling papers is analogous to our neurons. Of course our neurons do not "understand" what they are processing, but as a whole our neurons create an entity that itself has full understanding.

Semantics do in fact come from symbols. When I enter my password into my computer and it uses that password and allows me access to some resource as a consequence, that is semantics. The password string exists in a specific context within the computer system, from which it derives meaning. The difference between the computer system and a person is that the semantic meaning is not generic and integrated. The password exists within the semantic context of granting or revoking access to a resource. It can only exist here as there is no mechanism in standard programming languages for that semantic meaning to be integrated with other parts of the system (without explicitly programming every single case). The brain on the other hand has generic semantics that effortlessly allows integration of different kinds of semantics into a single whole. This is the only meaningful difference.
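
To make the password example concrete, here's a deliberately minimal sketch (hypothetical names, nothing like a real authentication system): the string itself is meaningless, and its "meaning" is exhausted by the single role it plays in this one context.

  # The string "hunter2" has no intrinsic meaning; it acquires one only from the
  # role it plays in this access-control context.
  stored_passwords = {"alice": "hunter2"}
  resources = {"alice": ["diary.txt"]}

  def login(user, password):
      # The same comparison machinery as any other string check; the "semantics"
      # (granting access) lives in what the surrounding system does with the result.
      if stored_passwords.get(user) == password:
          return resources.get(user, [])
      return []

  print(login("alice", "hunter2"))   # ['diary.txt']
  print(login("alice", "wrong"))     # []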


http://discovermagazine.com/2011/nov/12-out-there-mysterious...

> We know a lot about the physics of the macroscopic world, but can we be sure that we aren’t missing one of those crucial ingredients? The answer is yes: In certain well-defined cases, we can be very sure. [...] And while there may be unknown forces waiting to be discovered, we can say with great confidence that such forces must be so feeble that only a professional physicist like me would really care.

Not only is the _detectable_ universe entirely made of matter and energy (which is not a hypothesis: it's a tautology, if you think about it), but there isn't a single shred of evidence that our brain employs anything beyond chemicals and mundane electromagnetic forces.

"Kuhnian revolution" happens when the evidences mount up to the point the old paradigm cannot explain them away without great contortion. What we have here instead is a very confused semantic hair-splitting (no pun intended) about whether computers can do "semantics".

Modern materialistic science is supported by vast evidence and is very much alive and well, thank you very much.


Read, and then critique, Nagle's recent book: http://www.Amazon.com/Mind-Cosmos-Materialist-Neo-Darwinian-...

Please come back and post your response here, or start a new Hacker-News story.


I'll take it that you don't have any evidence that the human brain requires anything other than electromagnetism, then.


Au contraire.

Since electromagnetism cannot cause intentions, or goals, and since the human obviously has both intentions and goals (teleology), it is clear that the human brain does encompass something in addition to electromagnetism. (The same statement applies to all of the chemical processes involved in the human brain.)

Please read Nagle's work. Read Searle's work. Read all of the philosophical literature on "intention".


> Since electromagnetism cannot cause intentions, or goals, and since the human obviously has both intentions and goals (teleology), it is clear that the human brain does encompass something in addition to electromagnetism. [emphasis added]

Let's extend this to include chemical reactions (since we know that it's more than just wires and currents in our skulls), and my question becomes: what evidence do you have that this is true?

A few hundred years ago if you had claimed the sun was illuminated by the energy from the massive number of fusion reactions occurring under its surface, no one would have believed you, not least because there was no language at the time to even discuss the physical mechanisms that actually occur. At best you may have been able to persuade people that it was a giant flame, akin to the fire and candle they knew. That doesn't mean that there is some other supernatural thing going on, just that the natural goings on weren't understood.


> Since electromagnetism cannot cause intentions, or goals, and since the human obviously has both intentions and goals (teleology)

You're begging the question here. If the world is materialistic, then physical processes can cause intentions, hence this claim is false if your conclusion is false.


No, I'm not begging the question. In another response, I readily admit that if materialism is true, then AI is a foregone conclusion.

Please prove that mere electromagnetism can cause intentions, goals, and teleology. Limit yourself to Maxwell's equations, and the consequences thereof.

I look forward to your response, but I am not hopeful that you will provide a cogent answer.


> In another response, I readily admit that if materialism is true, then AI is a foregone conclusion.

Of course, that is a meaningless admission when you repeatedly claim that it is proven that materialism is false elsewhere.

I do not need to prove anything. I've already explained why your claims are unproven and why your arguments do not support your conclusions unless your stated conclusions are already true.

I am not the one repeatedly making strong assertion of fact about controversial claims. The only thing I've made strong claims about is the logical validity of some of your arguments.

You might note that I've carefully tried to avoid making strong assertions at all in this matter. While I default to the materialistic hypothesis, I do so in the absence of evidence of anything else. Because of this default assumption, I by extension assume that human level intelligent AIs - and above - will eventually happen.

Those assumptions are as far as I'll go with respect to making claims about it, and there's nothing there that demands a proof, since I've not claimed they're proven, unless - given the context of this discussion - you want me to prove I actually hold those assumptions, rather than being some automaton that is somehow only capable of syntax.


If you had told someone in the year 1800 "computers are impossible, just go ahead and build one", said person wouldn't have been able to build one, but you'd still be wrong. You can't just casually ask this in an argument about the theoretical possibility of AI.


>... since the human obviously has both intentions and goals ...

You're just arguing past people.

Humans obviously do this, computers obviously don't, therefore computers cannot do what humans do.

There's nothing "obvious" about this. It's a completely subjective interpretation. How do you respond to people who say humans obviously do act precisely like a very-complex computer?


In fact, one of the reasons I bothered answering him on this is that the more I learn about how humans respond, the less inclined I am to consider the possibility that we're not machines, and I find it extremely fascinating how someone can feel so sure we're not.

More and more, I am coming to terms with seeing humans as far simpler machines in many respects than what we'd like to think, on the basis of how many response patterns appear to be largely "hard-wired" and require little to no higher thought processes, even though we often make up elaborate explanations after the fact if challenged (as can be demonstrated by asking people why they did <insert random thing that the person in fact did not do, but wouldn't remember conclusively they didn't>).


I suspect people can "feel so sure" because they can feel, and see no evidence that computers can.

But that's a dangerous edge to walk regardless. Humans feel, computers don't - most would agree. Humans feel, animals don't - many would agree, many would not, many would draw a finer line between e.g. mammals and others. Humans feel, inferior-race-X does not (frequently argued about slaves) - many would disagree, but obviously many do think this is true.

For myself: I've seen nowhere near conclusive proof either way, and continually-increasing evidence in favor of human == machine.


> Since electromagnetism cannot cause intentions, or goals

Says who, where's the evidence of this? This is a bold claim and an unproven one. As it's your primary axiom, the discussion can go no further until this fantastic claim has some evidence.


> Since electromagnetism cannot cause intentions, or goals

Yes it can. Source: my brain.


I applaud your clever replacement of the now trite "paradigm shift" with "Kuhnian".

I haven't read the latest from Nagel so can't speak to that, but Searle's arguments work against only the most simplistic approaches to symbolic AI (which is no longer a very common framework), and his continued willful misunderstanding of how people in the field tend to think about cognition betrays a lot of essentialism on the topic. Searle's attacks may stand against a naive form of functionalism, but no one cares that "Watson doesn't know it won".

Added: I think non-materialist thinking hasn't caught on in most fields of scientific endeavors because holding non-materialistic views in no way advances those occupations. It isn't the funding, it's that no one has come up with a way that having dualistic approaches lets you generate better testable hypotheses! If they did, the funding would come.

I'm not sure if you're trolling, I'm mostly responding because your comment is just on the edge of being reasonable.


> but no one cares that "Watson doesn't know it won".

I care, especially that Watson doesn't know when it lost or learn from its mistakes, because it doesn't actually understand anything. Watson's fragile intelligence relies on the programmers tweaking its algorithms for every particular subject. It's a glorified search engine, as opposed to a true advance in general AI.


I doubt parent meant that literally. It was probably more a catchy way of saying "we don't care what happens in the Chinese room". Of course IBM Watson, specifically, doesn't exhibit human level intelligence.


Yes, the classic definition of "general AI" is "whatever computers can't do". That definition is thoroughly out of fashion now.


Putting aside all philosophical questions, I do want to dispute that we're wasting money on AI. Even if you believe human-level artificial intelligence is not achievable with computers, you can't deny that AI so far has already paid huge dividends, and will continue to pay off even more in the future. Examples abound, such as self-driving cars, medical diagnosis, and automating more and more forms of busy-work that humans currently have to do. So I don't really see how it's a waste at all.


It is obviously not a waste when we use computers to do syntax. Chess-playing computers can now beat the best international grandmasters; this is an example of computers doing syntax.

For every task which is syntax only, it is a fantastic idea to make computers do the work, and relieve humans from the drudgery. I'm all in favor of that.

But for any task which requires semantics -- real human intelligence -- it is foolish to attempt to replace humans. It cannot be done.

What is required is the wisdom to know the difference between the two.


> But for any task which requires semantics -- real human intelligence -- it is foolish to attempt to replace humans. It cannot be done.

Well we don't know this for sure, do we? "It's an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, ... it is an open empirical question whether any such processes are involved in the working of the human brain." [1]

[1] http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#Ph...


If you accept materialism as true, then by the Church-Turing hypothesis it is a necessary and foregone conclusion that computers will achieve full human intelligence, and even more, because they do not get tired and are not distracted.

But the hypothesis of materialism is what is in question here, both by my citation of the difference between syntax and semantics, and the obvious (to anybody who understands what a computer is and does) conclusion that computers do only syntax, while humans clearly do semantics also, and also by Searle's and Nagle's work.

I think that the evidence (which most people want to deny) is very clear that materialism is false. Most people deal with this evidence by ignoring it, or by denying it exists in the first place. They never address it.

You can prove me wrong. You can prove that materialism is correct. Just produce a real AI which is every bit as intelligent and capable as a human. Produce an AI which can really do semantics. Produce an AI which clearly convinces everybody that it is really intelligent in the way humans are, without any parlor tricks (like modelling an idiot savant).

It is much harder to prove that materialism is false, but that is what Nagle has done in his recent book. Have you read it? If he has not convinced you, please critique his arguments.


> and the obvious (to anybody who understands what a computer is and does) conclusion that computers do only syntax, while humans clearly do semantics also,

You keep claiming this is "obvious". But it can only be "obvious" if you first accept that the materialistic hypothesis is false or that there is some reasonable definition of "computer" in a purely materialistic universe that does not include a brain and there is no possible alternative structure that can meet a reasonable definition of "doing semantics".

To me, there's no reasonable way of claiming that the answer to this is "obvious". Firstly, the materialistic hypothesis is my default assumption in the absence of any evidence whatsoever that it does not hold, secondly, in the absence of evidence against the materialistic hypothesis, it is my default assumption that the brain is a computer.

Thirdly, while I accept that we could define a category of "sentient brains" and intentionally exclude it from the category of "computers" for the sake of argument, and while I concede that using such definitions it might be possible that there is no alternative means of physically structuring computers that could give the same outcome as the structure of a brain, even then I don't see any justification for why it would be obvious.

Your arguments in this thread, when not circular, rest on a whole cloud of hand-waving away controversial issues behind claims of "obviousness".


If humans and other animals have some nonmaterialist magic sauce in us that enables us to actually think when nothing else in the universe can do that, then a few questions follow quickly from there:

1) Why have we never observed the magic sauce directly in an experiment?

2) Why does the magic sauce only ever explain the otherwise-not-yet-explained instead of making novel predictions? How can the magic sauce fit in with the "AI Effect", in which AI detractors continually move the goalposts for "intelligence" the instant an algorithm can solve any particular problem intelligently?

3) In a related matter to the "AI Effect", how can we use the magic sauce to take over the world and kill all humans? Since this is the current standard for well and truly forever defeating the "AI Effect" and getting detractors to admit (possibly from beyond the grave) that your software really was intelligent, a magic sauce of human intelligence should be able to accomplish the same goal.

4) How does the magic sauce causally interact with the material world to generate our thoughts and consciousness?

5) Where does the magic sauce come from?

6) How can we make more of the magic sauce?

7) What other nonmaterial, irreducible phenomena does the magic sauce exist alongside, and how does it interact with those other phenomena?

If you really believe in nonmaterialist magic sauce, and aren't just engaging in a "dualism of the gaps" argument, you should be able to at least propose scientific avenues for investigating these seven questions.


I am not an expert in whatever field "materialism" belongs, so I will defer to accepted knowledge (or lack thereof) as professed by layperson-friendly sources such as Wikipedia. Till the time I really get interested in these fine differences.

May I point out that the tone you use in these discussions is not, in general, of a nature which encourages a lay person to even read what you say, much less to follow up on your ideas? People really do not like being looked down upon, and your writing comes across as quite condescending (among other things). Here are a couple of things from the parent comment to illustrate this:

1. "the obvious (to anybody who understands what a computer is and does) conclusion that computers do only syntax, while ..."

Do you see what's wrong with a statement of the form "What I want to prove is obvious to anyone who knows even a little something", and how it might come across as (i) unacceptable hand-waving even in a research paper in the relevant field, and (ii) highly condescending in public discourse?

2. The whole paragraph starting with "You can prove me wrong. ..." comes across as childish. Who am I, anyway, and why should I go to all that trouble to prove you wrong, even if I was somehow capable of doing it? As an analogy, suppose in a discussion on life outside earth I quote current expert consensus as found on Wikipedia as saying that there is potentially life on Europa [1]. And someone retorts saying "No. Prove me wrong. Just build a spaceship which can go bring some life from Europa." Do you see something wrong with such a response? In particular, do you see something wrong with the use of the word "just" here?

3. Your repeated insistence that everyone should read some work and critique it before countering your arguments comes across as obnoxious behaviour.

4. A minor point: it is Nagel, not Nagle. I mention this because I see you making the same mistake in multiple comments.

You seem to have interesting points to make. It would be good if you make them in a way which people find enjoyable to read.

[1] I made this up on the spot. I don't know what the current expert consensus is, so check Wikipedia before taking this as true :).

(Edit: Formatting)


Thank you for correcting me on Nagel's name. I'm embarrassed that I misspelled it, cuz his book is right in front of me.


If a bright-line distinction between syntax and semantics requires a rejection of materialism, to me that is a pretty compelling reason to not draw a distinction between syntax and semantics.

Maybe "most people" ignore the evidence, but I think anigbrowl gave a pretty good response and I'd reiterate his suggestion that Hofstadter and Dennett have given adequate replies to Searle. I would go so far as to say Hofstadter's Godel Escher Bach is the most important popular-audience book ever written about AI.


There is clearly a distinction between the two, and we don't know how to use symbols to represent meaning [1]. But the fact that we don't know how to do so yet obviously does not mean that that is no way, or that our brains are not doing it right now.

[1] http://en.wikipedia.org/wiki/Symbol_grounding


I cannot believe this misguided crap is second comment. Searle has been debunked so thoroughly and so many times, he's like the Kirk Cameron of AI.


What are you referring to with the word "Idiocy"? Minsky winning a prize?

  "But the evidence is overwhelming. The evidence cannot be denied."
OK, so provide the irrefutable evidence instead of your opinionated, anecdotal "evidence" that computers cannot associate meaning with a symbol. You make an untestable claim that computers cannot do semantics, perhaps they can and we haven't built the right type of computer. Then you use circular logic to "prove" that a computer can only do syntax. Of course a computer that can only do syntax can only do syntax. The first rule of tautology club is the first rule of tautology club.

I have not read the Nagel book, and I am not saying you are wrong, I am saying that nobody is going to believe you if you don't substantiate your argument with real evidence.

If your "idiocy" claim was against Minsky, I would urge you to read even a small portion of his work, particularly Causal Diversity/the Future of AI [1] which is a short technical paper where he talks about how computers will not be able to advance in many areas until they can understand word meanings (semantics).

[1] http://web.media.mit.edu/~minsky/papers/CausalDiversity.html


What you wish to discuss is philosophical, rather than scientific in nature (for the time being), and there is nothing that can be said in the span of an HN comment that can substantially change anybody's mind, one way or the other.


The Ph.D. degree is all about philosophy, no matter what field it is in.

There is nothing which is scientific (dealing with knowledge in general, according to the Latin roots of the word), which is not also philosophical (dealing with the love of wisdom, according to the Greek roots of the word).

You are correct that I cannot change anybody's mind, one way or the other. I was only hoping to plant seeds.


But there are some things which are philosophical, yet not scientific, which was what I was alluding to. My experience is that these purely philosophical discussions tend to churn over the same issues repetitively without any resolution - because there is no clear way to reach one, unlike there is with science.

In fact, philosophers write whole books addressing each other's arguments without much in the way of changing each other's minds. This highly disincentivizes me from attempting to discuss them, at least in a forum like this one.


> But there are some things which are philosophical, yet not scientific

I think you are wrong here as well.

There is an entire branch of philosophy dedicated to the study of what we (can) know, and how we know it: epistemology.

Some philosophers get things wrong, just like some people get their mathematical sums wrong, and some scientists follow incorrect hypotheses for many years. But that is not a valid excuse for neglecting to search for the Truth, or failing to arrive at Truth.

Indeed, the entire aim of all of science is to arrive at Truth.

> because there is no clear way to reach one

The laws of logic are very clear, even though most people do not follow them most of the time. You cannot even think properly without obeying the laws of logic. http://En.Wikipedia.org/wiki/Law_of_thought (And sadly, indeed, many people do not think properly.)


Epistemology itself is an example of a branch of philosophy that is basically unscientific. That doesn't mean it's of no worth, or anything of the sort. It just means that it makes no falsifiable predictions. This is typical of philosophy - things that are scientific end up outside the purview of philosophical argument, as a scientist can just test it at that point. All that remains in philosophy are thus the unanswered, and possibly unanswerable questions.

> The laws of logic are very clear

Clear logical laws do not imply that truth can always be reached by application of it. Differing (and untestable) premises are also problematic: see all the opposing schools of thought in various branches of philosophy.


"Now what is the message there? The message is that there are no "knowns." There are things we know that we know. There are known unknowns. That is to say there are things that we now know we don't know. But there are also unknown unknowns. There are things we do not know we don't know." -Donald Rumsfeld

Some philosophers of epistemology get things so terribly wrong that they follow incorrect hypotheses for so many years and actually take the country to war based on fabricated intelligence that they knew was in fact provably wrong, which they manufactured for purely political reasons to justify their hidden agenda, yet they still won't admit that they were wrong.


The Truth you aim to arrive at, is the Truth you started from on your quest for the Truth. There is no escaping the Strange Loop.

Philosophy, Arts and Science create Truths to ride on to new places, like Silver Surfer on his surfboard. Finding the Ultimate Truth itself, I imagine, would be what some call Enlightenment.

The paradoxical wisdom taught in many of the ancient classics leads me to wonder if submission to the nature of Paradox really is at odds with the scientific method and desire for Truth, or if they are just two forms (passive and active) of reaching the same understanding.


> I was only hoping to plant seeds.

Considering you come off sounding like a quack, you're not the type to do any seed planting. You argue fallaciously with argument from authority, and then reference authorities that most here clearly consider laughable and debunked, and you want us to take you seriously. Really?


There is no such thing as semantics (in the sense that you mean.) You are basing your entire argument on a bad intuition. You literally just made up a definition of "semantics" that excludes computers, and then Argue 'By Definition' (http://lesswrong.com/lw/nz/arguing_by_definition/).


SimAntics is a visual artificial intelligence programming language for scripting the behavior of irrational simulated people and intelligent inanimate objects, developed for The Sims at Maxis. ;)

(That's lower case game industry artificial intelligence, not Upper Case Marvin Minsky Artificial Intelligence.)

http://wiki.niotso.org/SimAntics

http://simswiki.info/SimAntics

http://modthesims.info/t/111469

http://niotso.org/2012/12/23/edith-cracked/

http://www.qrg.cs.northwestern.edu/papers/Files/Programming_...

http://donhopkins.com/home/movies/TheSimsPieMenus.mov


Searle's whole Chinese Room argument is based on the notion that a Turing-test-passable system for processing Chinese input and responding with syntactically correct Chinese output can be engineered, but the man inside the room won't know what he's doing. This is flawed in two ways.

First, it assumes the man in the room can't learn anything about the system he's manipulating and eventually draw inferences about its grammar, which I think is bogus. Now true, you could sit in the box for a long time and not learn how to speak Chinese because you haven't learned the sounds that are associated with various Chinese characters. But eventually, after sufficient practice, you'd be able to read and write Chinese effectively, and I argue that you'd be able to derive the semantics by inference. I am arguing this on general principle, though having spent some time studying Chinese from books but not speaking much of it, I'm going to throw in 2 cents worth of empirical experience as well.

The other objection is a deeper one, made via a reductio ad absurdum; if a person can't learn Chinese this way, does an English speaker really understand English? Sure, s/he has all the appearance of comprehension and can conduct a conversation in person or via the written word, but how do we know the person isn't just mindlessly manipulating a set of rules that has been internalized since youth? Indeed, given the lack of critical thinking some people exhibit, there might even be some truth to this! but this is the heart of the problem - there's nothing about the Chinese Room argument that can't be restated as an English Room argument and used to deny the sentience of a native English speaker. And now we've come right back to Cartesian arguments about whether there is some particular seat of consciousness within the brain, some part that is more vital than others and which comprises the brain's 'driver's seat' - whether that's the pineal gland (Descartes), the corpus callosum, the anterior hippocampal gyrus (Jaynes) or what-have-you.

Putting this in the context of thermostatic beliefs, I see Searle's point about the thermostat not really believing anything...but then if I believe 'it is too hot/cold in here' am I having a real belief or is this just a convenient abstraction of my aggregate levels of cellular ATP and physical work levels to keep my body functioning, making my brain little more than the thermostat for my organs which is where 'the real action' of consciousness is taking place.

Essentially, I'm abstracting Doug Hofstadter's elaborate refutation of Searle in Godel Escher Bach; I'm with Hofstadter and Dennett in being a materialist proponent of strong AI, and think Searle's argument is isomorphic to the 'god of the gaps' argument made by intelligent design proponents.

I haven't read Nagel's new book yet, and I'll give it a whirl, but since these arguments are essentially philosophical rather than empirical I don't anticipate any sudden conversions. In turn, I think you ought to try Julian Jaynes' Origin of Consciousness in the Breakdown of the Bicameral Mind.


> I think you ought to try Julian Jaynes' Origin of Consciousness in the Breakdown of the Bicameral Mind.

Thanks. I'll buy it and read it.


It's not responsive to AI as such, but it does offer a sufficiently different model of consciousness that it might meet your Kuhnian threshold. I'm not sure that it's right, but it's so elegant that I feel it ought to be.


Eh, this is ungracious. Of course computationalism, eliminative materialism and so on are bosh. But AI research has produced many practical advances and will produce more. And not just AI advances: look what JMC and the AI labs did for programming languages, algs & DS, and operating systems. Hard to grudge the lads some recognition for that.


Henry Minsky, Marvin's son, works at Nest on the "Thermogotchi", a digital pet that lives on your wall, that's a sensor-driven, Wi-Fi-enabled, self-learning, programmable thermostat which you train to adapt to your habits and lifestyle, and keep happy by feeding love and energy.

http://www.beartronics.com/

Google liked the idea so much that they acquired the company that developed it. So if that isn't a practical application of AI, I don't know what is.


On the thermostat, I get the impression McCarthy was being a bit obtuse just to goad Searle a little. What he was illustrating is that simple systems can exhibit apparent intentionality while remaining simple. A steam governor or a thermostat is a great example. Living creatures, like the Aplysia californica snail studied by Eric Kandel, make use of analogously pretty damn simple mechanisms for reasonably sophisticated behaviour. Following this line of research, we've created remarkably effective systems like the 'syntax-only' Google Translate system, and IBM's Watson.
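
To underline how little machinery is involved, the thermostat's entire repertoire of "beliefs" fits in a few lines (a sketch with made-up setpoints, not any particular device):

  def thermostat(temp_f, setpoint=70.0, tolerance=1.5):
      # The three "beliefs", reduced to a pair of comparisons.
      if temp_f > setpoint + tolerance:
          return "too warm"        # "belief" 1
      if temp_f < setpoint - tolerance:
          return "too cold"        # "belief" 2
      return "just about right"    # "belief" 3

  for t in (65.0, 70.0, 75.0):
      print(t, "->", thermostat(t))

Whether you call those states "beliefs" is the intentional-stance question, but the mechanism itself is as simple as it looks.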

The questions posed by Nagel and Searle are interesting ones. Qualia is really difficult to account for - it makes sense for a materialistic system to have 'distinct placeholders' for different experiences, but why should they 'feel like' anything? Precisely because they propose a non-materialistic solution, they're difficult ones to explore - and so it's not really surprising that their own attempts to illustrate their perspective haven't been that convincing to materialists. The Chinese Room thought experiment is only convincing if you start out assuming the only thing that can 'think' is the person doing the card shuffling, which begs the question. See Hofstadter's Metamagical Themas for a good exploration of the argument.

If you're interested in sophisticated explanations of conscious experience and semantics from a materialist's perspective, Hofstadter and Sander's 'Surfaces and Essences' and Minsky's 'Society of Mind' and 'Emotion Machine' are great. Dennett's 'Intentional Stance' is also relevant - discussing how we can sensibly talk about material systems as having intention. Bear in mind that dualism is the more 'natural thought' in historical terms, and even hardcore materialists lapse into dualist terms easily. Researchers are rightly suspicious of unexamined assumptions. When it comes to science of the mind we have a history of novel, unnatural thoughts that are also very flawed (like Behaviourism), but critically, materialism (and Behaviourism) have both been productive.

One thing you can comfortably say about the progress of science is that at any point we're going to be partly wrong. Materialists are actively investigating and testing ideas, which meets my definition of 'real science'. Even if researchers are wrong on an important point, it doesn't make their work worthless, so you should probably hold back on the character assassination. Brain injury's effect on subjective experience and behaviour suggests to me that pure mechanism is important to experience, which is why materialism is my 'default hypothesis'.

If you want to ask hard questions, then go back and look at your assertions. What does 'really doing' semantics entail? How can the 'void' between semantic and syntactic systems be accounted for? Or, if we can't currently explain it, what can we do to investigate it?


I am not agreeing with your argument, but let me play devil's advocate.

If somebody wanted to produce a real AI, every bit as intelligent as a human, he probably wouldn't go wrong by trying to reproduce in software what Pascal Boyer describes in his book "Religion Explained". http://www.Amazon.com/Religion-Explained-Evolutionary-Origin...

Boyer fails, in my mind, in many ways, but at the very least by not providing a solution to the Frame Problem. The Frame Problem looms very large in all of Boyer's descriptions of folk psychology, and how the human mind works. Yet it is completely unaddressed.

For a good explanation of the Frame Problem, see Daniel Dennett's argument "Cognitive Wheels: The Frame Problem of AI" in chapter 7 of "The Philosophy of Artificial Intelligence". http://www.Amazon.com/Philosophy-Artificial-Intelligence-Oxf... In this article, Dennett is his own worst enemy, in that he proves that AI isn't possible. (Nobody has solved the Frame Problem yet.)


> My thermostat has three possible beliefs: 1) It is too warm in here. 2) It is too cold in here. 3) It is just about right.

Translated to the current scientific fashion:

My brain has three possible beliefs: 1) It is too warm in here. 2) It is too cold in here. 3) It is just about right.


Don't just vote me down. Give reasons. Critique my arguments. Critique Searle and Nagle, whom I cite. (You have to have read their corpus first.)

Voting my comment down without giving any reason or justification proves my original point that nobody can stomach the actual evidence that I have raised, and which Searle and Nagle have also raised.

You are not yet ready for the Kuhnian revolution in AI which must eventually take place, and you will be on the wrong side of it.


I didn't downvote you (I am not much of a downvoter), but frankly, your comment (the first one) is exactly the sort of thing that can be legitimately downvoted as "not contributing to the discussion". You launched into a vehement philosophical critique that can't possibly be hashed out in a comment thread. Do you want people to start posting 20-page book reviews right here and now? And nothing you said is particularly earth-shatteringly new to anyone. It's just that none of us - not me, certainly - is going to solve or prove unsolvable the secret of consciousness in a HN comment.


Eh, it was rude but nonetheless relevant IMO, even though I disagree with the GP.


You're the outsider opinion here and must present strong evidence. There is no good evidence for dualism, everything we've learned about the brain is completely consistent with materialism.

You must build a stronger case than Searle and Nagel which have been previously thoroughly debunked.

And to your overall point: To claim that we're wasting money on AI in a world we'll have self-driving cars in a few years is ridiculous. I'm surprised your main comment still seems to have a positive number of votes.


if i understand you correctly, you seem to believe that qualia definitely affects physical reality. we don't know whether that's true or whether qualia is just a byproduct of physical reality, and given the probabilistic nature of what we can observe physically, we may never know.


It seems clear to me that qualia must affect physical reality. If it didn't, we wouldn't be talking about it. If it was just a byproduct with causality going only in one direction, then we'd never talk about it, because the behavior of a physical system with that byproduct would be identical to that same physical system without it. There has to be a causal chain from this discussion back to qualia, unless the discussion happened by coincidence, which is extremely unlikely.

I don't think this tells us anything about what qualia is or whether it is or isn't a material process, but I don't think it's tenable to say that it either doesn't exist entirely, or exists but doesn't affect anything.


i'll admit that that's a convincing argument, but there are ways in which apparent causality can be shown to be illusory. one thought experiment i remember from school involves someone watching a movie in which one character punches another character, and the punched character falls backward. suppose the watcher knows absolutely nothing about how movies are recorded, or re-played, and sees only the lifelike images; then there would be clear, apparent causality of the second character falling over as an effect of being punched by the first. in reality, the only causality is that set up by the mechanics of the movie projector.


Well, it's a probabilistic argument. While it's possible that it's just a coincidence that we both have qualia and discuss qualia, it's an extremely unlikely coincidence.

To twist your movie analogy beyond all use, it's like trying on some clothes in a dressing room, then watching a movie with a scene that features you trying on those exact same clothes in that exact same dressing room in the exact same way you tried them on. It's possible that the filmmaker just happened to capture the exact same scene by chance, but it's vastly more likely that he was secretly recording you.


i would argue that we can't accurately infer any likelihood without knowing what the entire probability space is. in the dressing-room example, we are assuming there are not many, many dressing rooms that look similar, and many, many people that look just like us, and many different sets of the same clothes. but that is just an assumption. we have no idea about the probability space of different material universes and how qualia is embedded in them.


I think that "qualia" is the wrong question. Please see my other comments on this story.


i do not believe in materialism, but i do believe that intention and goals are emergent phenomena. nevertheless, i think that's mostly a separate issue.

it is a false equivalence to say that if materialism is false then it is impossible to build strong AI. if the material world alone can be described by laws of causality, then we should expect to be able to simulate some significant part of it. if it cannot, and if extra-material forces impact the material world, then we may have no hope. but the existence of extra-material stuff (which i call "qualia") may or may not exert a force that affects the material world.


Don't compilers and type checkers do rudimentary semantics?


If I understand yc-kjh's point, their "semantics" are really just more complicated syntactic rules. No real understanding occurs within the code. Or, the code doesn't understand what it's doing; it's just doing what it's been instructed to do by us. (On that note, the code doesn't understand anything because the code is a non-entity, it's a thing, like a book or a car engine. A car engine doesn't understand the fuel it's burning or the gears it's turning or why, it just does it because that's what it does.)


>If I understand yc-kjh's point, their "semantics" are really just more complicated syntactic rules. No real understanding occurs within the code.

Excuse me, but did you actually intend to express that anything which can be encoded in second-order logic is just syntax and has no semantic meaning? Because a reasonably sophisticated type system can in fact express arbitrary propositions in second-order logic; the resulting compiler might have undecidable type inference or checking, but it will in fact be following second-order logic.


No. I actually disagree with the point. I was trying to provide an interpretation of what had been written by the OP. And based on their response, my interpretation matched their intended meaning.

EDIT: I wrote this when I wasn't fully awake. I agree with the OP slightly, but that's because the OP is using "semantics" in a different way than how computer scientists (like myself) would use it. OP's definition of syntax is, basically, "rules". OP's definition of semantics is "understanding", not "meaning".

So by that definition, computers really only seem to be following more and more complex rules. OP seems to be of the opinion that the "semantics" side relies on something outside of encodable rules. In that regard, a computer doesn't understand its program anymore than a car engine understands its pistons. That doesn't mean that we can't encode meaning into our programs (what computer scientists intend with the word semantics). It also doesn't mean that I think OPs definition of semantics (understanding) is fundamentally impossible for computers (see my other post in this thread), just that, as far as I've seen and understand, it hasn't happened yet.


Yes



