Why Philosophers Should Care About Computational Complexity [pdf] (scottaaronson.com)
160 points by fbrusch on Feb 17, 2015 | 43 comments



For me, the key insight came from the following quote regarding Searle’s Chinese Room argument [1]:

> And while it was true, the critics went on, that a giant lookup table wouldn’t “truly understand” its responses, that point is also irrelevant. For the giant lookup table is a philosophical fiction anyway: something that can’t even fit in the observable universe! If we instead imagine a compact, efficient computer program passing the Turing Test, then the situation changes drastically. For now, in order to explain how the program can be so compact and efficient, we’ll need to posit that the program includes representations of abstract concepts, capacities for learning and reasoning, and all sorts of other internal furniture that we would expect to find in a mind. Personally, I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance.

This captures what I had intuitively thought regarding the link between computation and consciousness: consciousness is some sort of sophisticated, elegant computation embodied in human physiology.

[1]: https://en.wikipedia.org/wiki/Chinese_room


>Personally, I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance.

It's not so much the difference between polynomial and exponential complexity. It's the difference between a coinductive (i.e., infinitely looping, like a game loop) computation that adds new information to its "memory" in each iteration (which sounds monadic), compressing its inputs to accommodate that information, and then generating an output... vs. a super-exponential lookup table that performs the input-output mappings without actually engaging in any side-effecting or monadic computation at all.

So yes, when you think of these two vastly different ways of writing down a "cognitive-style" computation, it becomes clear that one of them maps relatively well to what we think of when we say "mind", while the other doesn't, and Searle is conveniently eliding the distinction.
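
To make the contrast concrete, here's a toy sketch in Python (my own illustration, all names hypothetical, not anything from the article): the first agent folds each input into a compact internal state before answering; the second just looks up the entire conversation history in a table whose size explodes exponentially with history length.

    # Toy contrast between the two architectures (hypothetical illustration).

    def loop_agent(inputs):
        """Coinductive-style agent: folds each input into a compressed
        'memory', then generates the next output from that state."""
        memory = {}          # compact internal state, updated every iteration
        outputs = []
        for x in inputs:
            memory[x] = memory.get(x, 0) + 1   # fold new info into memory
            outputs.append("seen %r %d time(s)" % (x, memory[x]))
        return outputs

    def table_agent(inputs, table):
        """Lookup-table agent: one canned reply per possible conversation
        history. The table grows exponentially with history length, which
        is why it can't fit in the observable universe."""
        history, outputs = (), []
        for x in inputs:
            history += (x,)
            outputs.append(table[history])     # no internal state at all
        return outputs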


Hm why does adding information to memory sound monadic to you? I am not questioning you, but just curious. My knowledge of monads is limited.

I would argue that Searle's Chinese Room is posed in a way that presupposes what forms computation might take or the limits it has. Since it was posed decades ago (in 1980), it made sense in its time. If Searle had begun teaching today (instead of in 1969) and seen convolutional and recurrent neural networks deployed in massive data centers, Watson winning Jeopardy!, neural Turing machines, or a worm's brain emulated and embodied in a small robot, I doubt he would have posed his Chinese Room the same way. Maybe he would not have posed it at all, being convinced in light of those achievements.


This is the first good refutation of the Chinese Room I've encountered. It was never mentioned in the "Minds, Brains, Machines" course I took, so I don't think it's talked about as much as it should be in philosophy-of-mind circles.


>This is the first good refutation of the Chinese room I've encountered.

Really? I'd think that someone saying 'lookup tables aren't thinking' without ever saying what thinking is would be presenting an obviously self-refuting argument.

Like a judge saying "I know crime when I see it: I don't have to write down what it is." It's simply not a basis for rational decision making, so it's certainly not a convincing assertion.

The fact that anybody thinks the Chinese room is an interesting philosophical quandary is far more interesting. It means these are primitive times and you don't actually need to be very smart to beat the competition.


>The fact that anybody thinks the Chinese room is an interesting philosophical quandary is far more interesting. It means these are primitive times and you don't actually need to be very smart to beat the competition.

There are genuinely competent people doing competent work in the field of philosophy. They just don't make up enough of the peer-review boards to throw out everyone who isn't as competent as them.

The continuing influence of mathematical logic and its attendant "general axiom to specific theorem, As Above So Below" style of thinking does not help.


The whole Turing Test thing is massively overblown. It's a thought exercise, and not a particularly great one. It's gotten completely out of hand, with Kurzweil, Chomsky, Searle, etc. all weighing in.

A computer passes the Turing Test, a human fails the Turing Test - or vice versa - that's all there is to it.


Yeah well, context is king. The aim of the Turing Test is to say that judging machines based on their internal structures isn't very fair, since that's not how we judge humans. There's no real way of knowing about other people's "qualia" or whatever you want to call it. To skirt that problem, Turing suggested a behavior-based approach.

I think we probably judge others to be conscious actors in more ways than this, though. We probably accept that other people have "qualia" from the fact that we have it ourselves, and then we kind of learn that our parents have it, and so on, until it's not a huge leap to generalize that to all of humanity. Of course, it's not like people have always assumed the same kind of "qualia" for all of humanity; in a lot of conflicts this has probably been thrown out the window to make it easier to kill your enemies. Either way, it's probably quite complex how we determine that other people are conscious. Turing tried to skirt the metaphysical issue (that we can never really know; this goes for both humans and other machines), and I think this was a huge step in the right direction.


"Computing machinery and intelligence" wherein the test is outlined, with background. aka "The Imitation Game"

http://cogprints.org/499/1/turing.HTML

> The aim of the Turing Test is to say that judging machines based on...

That wasn't the aim at all; again, I think you are reading way too much into it. The aim was to pose a test that would fool people.

Alan Turing:

"an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes"

"The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

The test is a very simple thought exercise where he shows a machine will be able to pass for a human, and a human will fail to identify a machine. That's it. No questions of consciousness, metaphysics! In fact he dismisses such questions in the paper.

His belief was that in 50 years' time the words would have changed, and people's opinions about what constitutes thinking would have shifted to the point where someone could say "a computer thinks" without challenge. That hasn't happened; when somebody says "a computer thinks" they are saying it with tongue firmly in cheek. Maybe in another 65 years' time his belief will come true.


You're essentially saying what I was saying: that Turing tried to throw metaphysics out the window in favor of a practical approach (behavior-based = fooling a human).

As for public perception of consciousness or "thinking", time estimations are usually quite sketchy.


Nope. We disagree fundamentally on the aim of the Turing test. Sure, we agree things were thrown out the window, and we agree it's a practical test. But you think its aim is to provide a fair way of judging whether machines are conscious.

I think it's a thought experiment to show that machines are going to be able to fool an average human for some small length of time eventually. That's a lot less grandiose.

A mistaken interaction with an IRC bot or an automated phone service has the same net effect.


Hm? "I think this was a huge step in the right direction." I guess what I am arguing is, that if there's any fair test, it has to be practical and not metaphysical, that's why I think the Turing Test is a step in the right direction. Do I make the claim that the Turing Test as-is is fair when it comes to determining "consciousness" by any reasonable and practical manner? No.

If I am reading what you are saying correctly, I surmise that you're saying that the Turing Test doesn't say anything about consciousness, but rather is just about the probability of a machine being able to fool a human. I am saying that the reason Turing even ponders the question is that there's no good way of definitively answering whether a machine (or another human, for that matter) is conscious or not. So that leaves the ability to fool (or convince) other thinking machines that one is intelligent (or conscious) as the only viable metric.

I feel like most of this is mainly semantics. I, too, don't believe that one can actually determine the consciousness of anyone but oneself. However, we do convince ourselves that other humans are in fact conscious, so it's still interesting to try to figure out what it would take for us to do that, and then apply the same standards to machines.


I love the Turing Test. Not as a test of intelligence - that's boring, and it's not entirely clear what it means anyway - but as a subversive test of personhood, which is what most people are really interested in.

Personhood rights are ethical and granted by society as a whole. A well administered Turing Test gives the computer intelligence a chance to argue for treatment as a full person, without the prejudices that would normally be attendant on the fact that it's instantiated in electronics.

Even if you started completely against machines having rights, once you've allowed one to convince you that it's human-equivalent, you've much less room to rationalize away poor treatment of it.


See my response here; you are also reading way too far into it:

https://news.ycombinator.com/item?id=9085587

It doesn't give a computer any chance to argue for treatment, it's about whether or not an average interrogator will be fooled by it.

Sure, machine rights are an interesting thought - but the Turing Test as outlined in the "Computing Machinery and Intelligence" paper isn't related to machine rights. Anytime a computer says "no" to "sudo shutdown now" without good reason, the plug will be pulled on it.

I agree with you that it isn't clear that the Turing Test has any meaning whatsoever. I'm not sure there ARE great implications from machines eventually being able to fool humans. Plenty of things can fool humans: paintings, mirrors, animals. And not all humans are equal at determining whether they are interacting with another human or not.


I'm sure you're right about what Turing intended the test be used for, but I think the question of whether a machine can fool a person into treating it like another person absolutely has relevance to the question of machine rights.

And unlike with 'intelligence', given some answer to the question of what rights a particular program should have, there are concrete steps to be taken.


> can fool a person into treating it like another person

"perceiving something as something" and "treating something as something", different verbs. Turing tests aren't about treatment, they are about perception.

The field is wide open for thought tests or games on how to "treat" computers so I would say knock yourself out and make a better one.

The Turing test doesn't do a whole lot of good at answering those questions - or, for that matter, any questions other than "will a computer be able to fool a human eventually?"

The notion of machine rights existing at all is evidence for Turing's prediction that the way intelligence is thought of would change, but most people would still challenge the idea that computers think.

Maybe in another 65 years that will change further - and machine rights will no longer be a fringe idea.


> "perceiving something as something" and "treating something as something", different verbs.

Well yes, but if you perceive something as something, then you will treat it as if it were that thing. And that is exactly what would happen during a successful Turing test - the computer would be treated as if it were a human by the judge.


Your position is trivially falsifiable: mistakenly interact with an IRC bot or an automated telephone service as if it were human; then, on realisation, you treat it like a machine again.


"..representations of abstract concepts, capacities for learning and reasoning... consciousness is some sort of sophisticated, elegant computation ..." -- But would it love?


This is my favorite Scott Aaronson piece! It has been submitted to HN before, though quite some time ago. The discussion on that one is worth referencing here as well:

https://news.ycombinator.com/item?id=2861825


It is like saying "philosophers should care about biology", or about some other domain X. If you look at the history of the philosophy of science, most of its practitioners came from the object-level domains. In other words, many philosophers of science were trained in domains such as physics.

Pierre Duhem was a physicist, but he also contributed to the philosophy of science: check the famous Duhem-Quine thesis. The same thing happened with the philosophy of biology. If you look at the journal "Philosophy of Biology", most of the early contributors came from biology.

I think Scott Aaronson is aware of the controversy between Oded Goldreich and Neal Koblitz. This controversy could have been formulated better if people in theoretical computer science had mastered the philosophy of science.

People in artificial intelligence (especially of the CS variety) make wild claims, like AI annihilating the human race, and these voices have strong backing from people like Bill Gates. However, they don't look at what philosophers have written about AI and the kind of nonsense AI folks spew. Yes, AI can solve particular tasks (a chess game, autonomous driving), but can it replace humans? Yes, say the AI group; no, says the philosopher Hubert Dreyfus. Check the latter's book: What Computers Can't Do: A Critique of Artificial Reason.


I love it when computing mixes with philosophy, despite being unable to grasp most of it.

Take, for instance, Rich Hickey's Are We There Yet? [1]; the Alfred North Whitehead quotes are great food for thought: you dive into it just as far as you are able to swim.

[1]: http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hic...


As someone with a major in philosophy, I can attest that it is in fact something we talked about often.


How deep do you go? Do you talk about P and NP? Or deeper, like interactive proofs or complexity-based definitions of randomness?


Philosophy generally has its own language for talking about these kinds of problems, and generally its own starting points. Gilles Deleuze laid a lot of the groundwork for using ideas from chaos theory to work on philosophical problems during his life (starting with some of his work in the 1960-70s). Some links:

- http://www.protevi.com/john/DG/PDF/Remarks_on_Complexity_The...

- http://www.euppublishing.com/doi/abs/10.3366/jipt.2010.0004

It could perhaps be useful to do similar work with computational complexity theory.


What is being discussed in those links is commonly called complex systems theory[0], which is a mathematical/scientific/philosophical field concerned with how complexity (in the natural language sense of the word) arises from what appear to be simple natural laws. For example, how chaotic dynamics arise from deterministic systems, how biology arises from chemistry, or how consciousness arises from neural activity. It is connected to chaos theory, nonlinear dynamics and systems theory.

It is completely different from computational complexity[1], which is what is being discussed in Aaronson's article.

[0] http://en.wikipedia.org/wiki/Complex_system

[1] http://en.wikipedia.org/wiki/Computational_complexity_theory


My mistake; I meant it as a basis for incorporating computational complexity. The original article posted shows how computational complexity is applicable to non-linear systems, which I think are philosophically more interesting. Deleuze's work would provide some good groundwork for seeing how those kinds of ideas can be applied in a philosophical context.


The difficulty is that if you get too deep into this stuff you stop doing philosophy and start doing something useful. Philosophers have to be very careful these days or they'll get dragged out of the tiny circle of darkness they insist on occupying.

There is a place for philosophy, but it shrinks every time a new discovery is made. It hasn't been quite as roughly treated as theology has, but the lag is only about a century. The vast majority of what philosophers once spent their time talking about is now part of some empirically grounded special science, where making stuff up and asserting broad foundational claims without proof or testing is no longer viable.

Linguistics, in particular, has carved off large chunks. Physics likewise: no fun speculating on the nature of space and time when we can pin them down pretty precisely. Various combinations of sociology, psychology, economics and political science have heavily encroached on moral philosophy.

This is not to say that the world can't always use a few philosophers to poke around in the darkness and perhaps cross the boundaries of other fields to good effect, but the majority of the work that was once their exclusive preserve is now almost always better handled by people with actual, detailed, empirical knowledge of the subject.

There is no value in privileging the human scale or human imagination, and fortunately no need to live within the artificial restrictions that philosophers place on themselves, particularly their refusal to do experiments or even make systematic observations to test their ideas; they prefer instead to rely on "what just makes sense", because that has always worked so well in the past. Insisting on empirical testing is in no way limiting, because ideas that cannot be tested are irrelevant: they must not have any impact whatsoever on any aspect of existence (if they did, observing and/or manipulating that aspect of existence would allow the ideas related to them to be tested).

Philosophy does teach a certain kind of rigour in thinking, because without either empiricism or mathematical deduction to fall back on, philosophers have to be extremely meticulous to avoid falling off the real axis entirely, but that starts to look more like a skill that ought to be taught to everyone rather than the basis for an entire academic discipline.


This is wrong. It is also a terrible misunderstanding of philosophy. It is completely possible to approach a problem from a "useful" perspective and a philosophical perspective simultaneously, and many people do. I had professors with PhDs in both philosophy and physics, because each academic discipline deals with similar problems independently.

Philosophy does not deal with testing empirical theories. No, there has not been encroachment by linguistics, sociology, psychology, or economics.

I'm sorry you wasted so much time typing this comment; I've read it three times now and I can't find a single thing you got right.


> There is a place for philosophy, but it shrinks every time a new discovery is made.

How do you define philosophy? Can't we see science as a subset of philosophy?


Philosophy of science is something you might be familiar with. It takes a tangential approach to science, and it is highly frowned upon in the philosophy community to make judgements about science directly. The idea is to reason about the scientific process rather than trying actively to define it. Let the scientists do their job; then, as a philosopher, you try to construct an intellectual framework to make sense of what they are doing.

The realism-antirealism debate is an example of that today. People argue about whether the science being done at CERN, and the models they produce to explain the observed phenomena, are explaining the "real" world as it is, or whether they are simply convenient mathematical theories that attempt to explain empirical phenomena. There is no debate over whether or not they are doing valid science; the debate is over what the process they engage in and the conclusions they draw imply for our understanding of reality, our experience, and what it means to attain knowledge.

The people debating these issues are by no means removed from the scientific community either. Most professors who write these papers either are physicists themselves or work closely alongside them, and have intimate understandings of the science and math before they start to write. The papers are themselves targeted at other physicists and philosophers with understandings of these topics far more advanced than my own.

I find it unfortunate when people like the commenter above make the claims that they do, because it detracts from the respect that the field of philosophy deserves. It may not have the most obvious practical applications, but it deserves attention, and some of it is really mind-bending stuff. When people like Stephen Hawking shit on the subject it's really a shame. They are causing themselves to miss out, and they are driving potential great thinkers away.


Can someone explain to me the zero-knowledge proof example problem discussed on page 37: given two graphs G and H, prove that they are NOT isomorphic?

"But as noticed by Goldreich, Micali, and Wigderson [65], there is something Merlin can do instead: he can let Arthur challenge him. Merlin can say:"

"Arthur, send me a new graph K, which you obtained either by randomly permuting the vertices of G, or randomly permuting the vertices of H. Then I guarantee that I will tell you, without fail, whether K ≅ G or K ≅ H."

I don't understand what it means for a graph to be isomorphic to another graph, and I do not understand how Merlin's challenging answer can provide a (zero-knowledge) proof to the given problem either.


An isomorphism between two graphs is an edge-preserving bijection between their vertex sets; in other words, it means there is a way to permute one graph's vertices so that it becomes equal to the other.

What Merlin is saying to Arthur is that no matter which graph K Arthur comes up with, it cannot be isomorphic to both G and H; if it were, then G and H would be isomorphic to each other. So Merlin, who is computationally unbounded, can always tell which of the two graphs Arthur permuted, whereas if G and H were secretly isomorphic, Merlin could do no better than a 50/50 guess.

Lastly, it is zero-knowledge because Arthur learns nothing about the relation between G and H beyond the fact that they are not isomorphic: he already knows which graph he permuted to get K, so Merlin's answer tells him nothing new.

Not sure if that helps.
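
For what it's worth, here's a toy Python simulation of one round of the protocol (my own sketch, not from the paper; graphs are edge sets over vertices 0..n-1, and "Merlin" brute-forces the isomorphism checks, which is fine since he's assumed computationally unbounded):

    import random
    from itertools import permutations

    def permute(graph, perm):
        """Relabel each vertex v of the graph as perm[v]."""
        return frozenset(frozenset(perm[v] for v in edge) for edge in graph)

    def isomorphic(g, h, n):
        """Brute-force test: does some relabeling of g equal h?"""
        return any(permute(g, p) == h for p in permutations(range(n)))

    def protocol_round(G, H, n):
        # Arthur secretly picks G or H and sends a random relabeling K.
        secret = random.choice(["G", "H"])
        p = list(range(n)); random.shuffle(p)
        K = permute(G if secret == "G" else H, p)
        # Merlin (unbounded) answers "G" iff K is isomorphic to G.
        answer = "G" if isomorphic(K, G, n) else "H"
        return answer == secret

    # Path 0-1-2 vs. triangle: not isomorphic, so Merlin never fails.
    G = frozenset({frozenset({0, 1}), frozenset({1, 2})})
    H = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
    print(all(protocol_round(G, H, 3) for _ in range(100)))   # True

If G and H were secretly isomorphic, K would match both, Merlin's fixed rule would answer "G" every round, and he'd be wrong about half the time; his consistent success across many rounds is exactly what convinces Arthur.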


A graph G is isomorphic to H if they are the same up to permuting the vertices. If you have a graph 1 -- 2 -- 3 (a path on 3 vertices, with 2 edges), it's structurally the same as 3 -- 1 -- 2, only the labels of the vertices are different, and that doesn't matter for a graph theorist. So we say the graphs are isomorphic.

I had a hard time understanding what it means when I was in undergraduate school, because it seems so trivial -- why can't I say that the graphs are simply the same, like 1 = 1 or {0} = {0}? You could say they are the same, but the interesting point is that it is not easy to decide whether they are the same if the two graphs G and H are big. Therefore, "graph isomorphism" is a computational problem in its own right.
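
A minimal illustration of the example above in Python (my own toy sketch): the naive check just tries all n! relabelings, which is trivial for 3 vertices but hopeless for big graphs, and that's exactly why the problem is computationally interesting.

    from itertools import permutations

    # Edge sets for the two paths in the example above.
    g = {frozenset({1, 2}), frozenset({2, 3})}   # 1 -- 2 -- 3
    h = {frozenset({3, 1}), frozenset({1, 2})}   # 3 -- 1 -- 2

    def is_isomorphic(g, h, vertices):
        # Try every relabeling of the vertices: O(n!) in general.
        for perm in permutations(vertices):
            relabel = dict(zip(vertices, perm))
            if {frozenset(relabel[v] for v in e) for e in g} == h:
                return True
        return False

    print(is_isomorphic(g, h, [1, 2, 3]))   # True: same graph up to relabeling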


Scott just gave a fantastic Nautil.us interview! Just placing it here for completeness' sake: HN link [1] and direct link [2].

[1]: https://news.ycombinator.com/item?id=9074033

[2]: http://nautil.us/issue/21/information/ingenious-scott-aarons...


Incidentally I'm reading Sipser's book for a philosophy course right now.


> The purpose of this essay was to illustrate how philosophy could be enriched by taking computational complexity theory into account, much as it was enriched almost a century ago by taking computability theory into account. In particular, I argued that computational complexity provides new insights into the explanatory content of Darwinism, the nature of mathematical knowledge and proof, computationalism, syntax versus semantics, the problem of logical omniscience, debates surrounding the Turing Test and Chinese Room, the problem of induction, the foundations of quantum mechanics, closed timelike curves, and economic rationality.

O.M.G. Yet another natural scientist trying to coerce philosophy to his 'way of thinking'.


Aaronson studies theoretical computer science, which is a subfield of mathematics. There is no coercion in his article. He is pointing out that while philosophers like to discuss the philosophical implications of computing, they tend not to discuss the more recent developments in the field that (he argues) would aid them in eliminating faulty arguments and clarifying terms like "artificial intelligence."


Well if you tried reading just past the abstract...

> To forestall misunderstandings, let me add a note of humility before going further. This essay will touch on many problems that philosophers have debated for generations, such as strong AI, the problem of induction, the relation between syntax and semantics, and the interpretation of quantum mechanics. In none of these cases will I claim that computational complexity theory “dissolves” the philosophical problem—only that it contributes useful perspectives and insights. I’ll often explicitly mention philosophical puzzles that I think a complexity analysis either leaves untouched or else introduces itself. But even where I don’t do so, one shouldn’t presume that I think there are no such puzzles! Indeed, one of my hopes for this essay is that computer scientists, mathematicians, and other technical people who read it will come away with a better appreciation for the subtlety of some of the problems considered in modern analytic philosophy.

(From some (very) cursory reading, this does seem to hold true.)


How can illustrating "how philosophy could be enriched", or "providing new insights" be considered "coercive"?


He isn't a natural scientist. What he's saying is, "hey, I see you're working on complicated deductive reasoning problems. Here is an adjacent field of people working on deductive reasoning problems that are relevant to what you're thinking about."


Yes, darn it, it's the philosopher's job to coerce people into their way of thinking - what was he thinking?

In all seriousness, I think philosophy has a problem: its disconnect with reality is widening.


Right observation. Wrong target.



