Interview with Scott Aaronson (scientificamerican.com)
192 points by robinhouston on April 23, 2016 | 82 comments



> In social sciences, there’s an absolutely massive bias in favor of publishing results that confirm current educated opinion, or that deviate from the consensus in ways that will be seen as quirky or interesting rather than cold or cruel or politically tone-deaf.

Isn't that a massive problem for meta-analyses? If there are 15 studies supporting the consensus/politically-correct position and 5 against it, that might be evidence against the consensus, if less than a third of researchers with against-consensus results have dared to publish their results.


You can assess this using funnel plots. If publication bias were not a factor, you would expect effect size to be uncorrelated with sample size. If they are correlated, publication bias is likely having some effect on the results.
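
A minimal sketch of the idea, using simulated data (the numbers, the crude significance filter, and the use of standard error as a stand-in for sample size are all invented for illustration, not taken from any real meta-analysis):

    import numpy as np
    from scipy import stats

    # Simulate a meta-analysis: each study has a standard error (small studies
    # are imprecise) and an observed effect drawn around a true effect of 0.2.
    rng = np.random.default_rng(1)
    n_studies = 200
    se = rng.uniform(0.05, 0.5, n_studies)      # per-study standard errors
    effect = rng.normal(0.2, se)                # observed effect sizes

    # Crude publication filter: only "significant" results (z > 1.96) appear.
    published = effect / se > 1.96
    eff_pub, se_pub = effect[published], se[published]

    # A symmetric funnel shows ~zero correlation between effect size and SE;
    # a clear correlation (funnel asymmetry) suggests publication bias,
    # in the spirit of Egger's regression test.
    print(stats.pearsonr(eff_pub, se_pub))

With the filter in place the correlation comes out strongly positive; run the same check on all 200 simulated studies and it sits near zero.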


Clever solution. Is there a more general principle? I thought that most MNAR problems were hopeless, but maybe not.


It helps that we have a rough model of the publication process that we can correct for. It's kinda like assessing the underlying distribution with only access to the tail of it, which isn't ideal, but can be done.
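
As a toy illustration of recovering a distribution from its tail alone (simulated data; the normal model and the known cutoff are assumptions made purely for the sketch), one can write down the truncated likelihood and maximize it:

    import numpy as np
    from scipy import stats, optimize

    # Pretend we only observe draws above a known cutoff.
    rng = np.random.default_rng(0)
    cutoff = 1.0
    tail = rng.normal(0.0, 1.0, 100_000)
    tail = tail[tail > cutoff]

    def neg_log_lik(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        # Normal density renormalized over (cutoff, infinity).
        return -(stats.norm.logpdf(tail, mu, sigma)
                 - stats.norm.logsf(cutoff, mu, sigma)).sum()

    fit = optimize.minimize(neg_log_lik, x0=[tail.mean(), tail.std()],
                            method="Nelder-Mead")
    print(fit.x)  # should land near the true (0, 1); tail-only fits are noisy

Real publication-bias corrections (trim-and-fill, selection models) are more elaborate, but the flavor is the same: model the censoring, then invert it.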


How does meta-analysis in a highly opinion-based field like the social sciences, where there are different, mutually exclusive schools of thought, even make sense in the first place?


And how does a lack of even rudimentary knowledge of a specific field of study qualify one to reject it wholesale?

The difficulties in meta-analysis in the social sciences vary from field to field -- psychology is qualitatively different from anthropology or history -- but if we had to generalize, they have more to do with the sheer dimensionality of the problem combined with the difficulty of precise measurement (and sometimes obtaining a large-enough sample), than with the field being "opinion based" or the existence of "mutually exclusive schools of thought" (I don't know how this differs from many other, more tractable research areas, and I don't see how this can be reconciled with saying that there's too much consensus).


I'd say that scientific fields being "opinion based" or forming around "mutually exclusive schools of thought" is a reaction to "the sheer dimensionality of the problem combined with the difficulty of precise measurement".

You see similar social phenomena in other fields with a measurement problem, for example programming language research ...


> You see similar social phenomena in other fields with a measurement problem, for example programming language research ...

Touché! :) (although there's plenty of social science research that is much more empirically rigorous than PL research, and the certainty/evidence ratio is certainly higher in PL)


With all due respect to Aaronson (with whom I find myself agreeing on many things), I wouldn't take his evaluation of the state of social science and his analysis of its core problems as correct. He's not a social scientist, and I don't think he'd appreciate an anthropologist evaluating the state of affairs in quantum computing research.


> He's not a social scientist, and I don't think he'd appreciate an anthropologist evaluating the state of affairs in quantum computing research.

But, from things I have read elsewhere, there does seem to be a reasonably convincing body of evidence that there are systematic biases in social sciences. Some of this perhaps stems from discrimination (e.g., in hiring practices) based on political allegiance. Jonathan Haidt (a social scientist) has spoken out on this topic[0], including specifically on sociology[1]. While that issue likely affects academics in general, political leaning likely has a much smaller influence on one's research in physics and chemistry, compared to one's research in sociology. It's true that Aaronson is not a social scientist, but I don't think that means he is wrong.

[0] http://people.stern.nyu.edu/jhaidt/postpartisan.html

[1] http://www.mindingthecampus.org/2016/02/a-conversation-with-...


> there does seem to be a reasonably convincing body of evidence that there are systematic biases in social sciences.

But Aaronson makes a far more specific assertion than "there is evidence for systematic bias in social sciences".

I agree that there is political bias in social science departments. It is painfully clear to anyone who ever sets foot in one, especially as an outsider (much like the sexism in SV). I think that overall it harms research, perhaps significantly, and it must be addressed. But outsiders sometimes view this (rather obvious) criticism as a complete refutation of the field(s), which it is most certainly not. To their credit, social researchers actively study this bias without need for external pressure, and even acknowledge it's a problem, something which cannot be said for other disciplines (one could argue that it is social researchers' job to study precisely such things, but at least you can see that they're not intellectually dishonest enough to avoid the issue or to dismiss it as inconsequential).

BTW, in general, I think that outsiders usually tend to overestimate internal criticisms (in any discipline). When (internal) critics say "our estimate of the variable x to be 10 is far too low!" they really mean "x is more likely to be 12", while outsiders interpret it as "x is probably 1000".

> political leaning likely has a much smaller influence on one's research in physics and chemistry, compared to one's research in sociology

True, but the correlation between political leaning and social research may very well be a reverse causal relationship: that one's research in sociology is far more likely to affect one's politics than research in physics. If there is an over-representation of a particular ideology in the social sciences (although it's hard for me to put all the social sciences in one category, because they are quite different from one another -- more different than, say, physics and chemistry), it may serve as evidence that some common and politically pertinent ideas arise from the research itself[1] as much as evidence of anything else.

BTW, I've read the interview with Haidt that you linked to -- thank you for that -- it is truly horrifying. But I get the sense that this is a rather American phenomenon. I certainly didn't experience anything close to it in magnitude at my university.

[1]: And I don't mean necessarily from the results, but possibly from the process.


There are systematic biases in the social sciences, and there are systematic biases in physics. Read Lee Smolin's The Trouble with Physics. As they are social systems, they will always be biased. And the most dangerous bias is the one that claims itself unbiased, fully objective.


Physics can collect so much data that the generally accepted standard for results is five sigma. There are biases in physics, but it's much harder to delude yourself into a five sigma result than a two sigma one--in the end the right answer becomes obvious.


> Physics can collect so much data that the generally accepted standard for results is five sigma. There are biases in physics, but it's much harder to delude yourself into a five sigma result than a two sigma one--in the end the right answer becomes obvious.

That's the case for clear-cut predictions from theory or empirical relations. But bias generally arises in a more subtle fashion than statistics on measurements. Even if you have a "5-sigma result", it still needs to be interpreted, and that's where biases can rear their heads. One possible manifestation (though not the only one) is that if one subscribes to a particular theory, one is generally biased toward interpreting results within the framework of that theory. And if one isn't careful, that can lead to diminishing the weight of evidence that conflicts with the theory (e.g., "it's 5-sigma, but those data aren't actually relevant to this issue").

As an example, one area I'm actively researching is the emission from molecules in galaxies. We see that some galaxies show enhancements in emission from a specific molecule (HCN). Everyone agrees that the enhancements exist, but there's wide disagreement on what causes them. And much of the discussion around it parallels people's opinions/biases about which is more important for a galaxy: how many new stars it is forming versus how active the supermassive black hole at its center is. The preferred interpretations for the extra HCN emission seem to correlate very well with people's prior notions* (which is not surprising). Of course, more data will eventually provide a concrete answer to this question, though what new data are collected is often chosen within an existing framework.

* - I'm sure it's true for me, even though I try to mitigate it as much as possible. But we often have trouble recognizing our own biases.


> That's the case for clear-cut predictions from theory or empirical relations. But bias generally arises in a more subtle fashion than statistics on measurements. Even if you have a "5-sigma result", it still needs to be interpreted, and that's where biases can rear their heads.

I agree, interpretation is the tricky thing. The difference is that in social sciences, both the result and the interpretation are suspect.


> There are systematic biases in the social sciences, and there are systematic biases in physics. Read Lee Smolin's The Trouble with Physics. As they are social systems, they will always be biased. And the most dangerous bias is the one that claims itself unbiased, fully objective.

I never claimed physics was bias-free. I just said that political affiliation likely imparts less bias on physics than it does on sociology. And that was just one example of a possible source of bias.


But that's pretty much what Aaronson is implying. That physics can be cold, calculating, unbiased while the social sciences are the opposite.

There is plenty of dogma in the physics world as well. For example, just look at the way people defend their favorite interpretation of quantum mechanics as absolute truth.


> just look at the way people defend their favorite interpretation of quantum mechanics as absolute truth.

Yeah, but these people are rarely actual physicists. A physicist might discuss "interpretations of quantum mechanics" over lunch with a colleague in a fun, speculative way, but it makes little difference to the research they perform. More importantly, it doesn't affect the predictions that their research makes. This is not the case with the social sciences.


It absolutely does affect the predictions research makes, because funding and PhD students are assigned to whatever models are most popular at the time. For a long time that was string theory, and a lot of departments considered other approaches to be non-optimal for career progression.

Which sounds fine if you believe that heterodox but still valid science gets into the mainstream eventually - like (e.g.) the theory of continental drift did.

But what if useful science falls through the cracks? What if we could have had a valid theory of quantum gravity by now, from a left-field approach that never had funding or PhD students to research it?

How would anyone know?


Are you sure you follow actual physicists? Have you ever heard Sean Carroll talk about how anyone who doesn't believe in the MW interpretation is pretty much stupid?


I mean, yes, there are a few physicists who have turned into science popularizers, but they're no longer at the forefront of scientific research (e.g., Michio Kaku, Neil deGrasse Tyson, Brian Greene).

I do quantum chemistry research. Do you recognize any of the names: Colbeck, Renner, Alavi, Booth, Kohn, Sherrill, Gill, or Pople? Colbeck and Renner are about as philosophical as you're going to get for scientists who currently perform actual research (Tegmark and Baez are in categories of their own).


Do you put Sean Carroll in the same category as the other science popularizers?

Also, isn't quantum chemistry too applied for there to be significant arguments about specific interpretations? I only know QC as it concerns DFT or QMC (I do semiconductor modeling)


No, he still publishes quite a bit. But I'd say it's unusual to find someone who is both a researcher and a popularizer.

> Also, isn't quantum chemistry too applied for there to be significant arguments about specific interpretations?

I suppose that is mainly the point of my original post; the vast majority of researchers who utilize QM do so without regard to the philosophical aspects of the subject. Wouldn't you think Sean Carroll is an outlier in that sense?


But then I could make the same point about the vast majority of humanities academics. Most work isn't paradigm-shifting; it's just building on top of currently accepted theories.

And that's my actual point. When it comes to rejecting or accepting new models, there is as much dogma in physics as there is in sociology, because the underlying problem is human beings, not the subject of study. Even Einstein dogmatically said "God does not play dice", rejecting QM based on his intuition about how the universe works rather than being cold and only looking at observation.

Personally, I don't think there is anything inherently wrong with dogma: it helps your research become an overdamped system; otherwise you'd be wasting all your time following ultimately fruitless leads. The important thing is to use this inherent quality of humans to your advantage.


> But that's pretty much what Aaronson is implying. That physics can be cold, calculating, unbiased while the social sciences are the opposite.

Sure, and I agree that it's not necessarily true. I just wanted to point out that I was never claiming physics to be unbiased. :)


So from now on, do we present our credentials and CVs before commenting to any extent in public? No. The whole point about science is that anyone at all should be able to criticize any piece of work on the basis of the facts.

Let's suppose the criticism is rubbish. All to the good. That will emerge and so will salient criticism, if it exists, that may have escaped the 'credentialized' scientists.

Only my impression of course but reading a few entries in Shtetl-Optimized, I would have thought that Aaronson would certainly appreciate and argue for or against any pertinent comments that some unknown Joe Doe might offer.


All I meant is that there's little point in presuming the statement is correct and then asking a followup, when there's little reason to believe that it is in the first place. Aaronson doesn't support this statement, nor does he argue it -- he just asserts it. In any event, my point had little to do with credentials and much to do with knowledge. I don't think Aaronson claims to have much knowledge of the social sciences, credentialed or otherwise. Would you also presume his medical advice to be correct?


This is argument from authority. But, in a world where we don't go through people's statements and rigorously examine the provenance of each of them, argument from authority is a valid heuristic.


Perhaps it is an argument from meta-authority, or a meta-argument from authority :) But I don't think it is. The argument is simply that there's little reason to accept a factual statement as correct if it is not supported by factual study.

The reason I'm so sensitive about this issue in particular is that I've personally been involved both with the social sciences and STEM, and I'm extremely annoyed to see people in the latter group put so much emphasis on evidence, and at the same time display such a broad dismissal of the former group without so much as a rudimentary understanding of the field(s). You hardly see historians dismiss research in solid-state physics the way you see computer scientists dismiss research in sociology (and this goes above and beyond the well-accepted inaccuracy of the latter due to intractability).


News to me that he is moving to UT Austin to build a center for quantum computing and theory. Terribly exciting! The impetus is a state-funded allocation of $4B+ for basic research. It really raises the question: are our governments spending way too little on moon shots? And what could have been accomplished in the last two generations if the focus had been on pure science rather than military technologies?


> the focus had been on pure science rather than military technologies?

I find your question a little funny in that Aaronson has said the DOD is where much of the funding for his research will come from for the next 5 years[1].

So, the $4 billion in Texas likely funded his recruitment package, but it is the military that is funding his research.

[1] http://www.scottaaronson.com/blog/?p=2687


What constitutes a "good return" on research dollars spent? Is it mere value created? Or a breakthrough in fundamental human understanding? I'm actually quite intrigued now to mine some real data!


I don't know that's really measurable except over very long periods of time. A discovery may be "interesting but useless" for many decades until engineering catches up, whether through improvements in engineering technology or subsequent discoveries that make the first discovery more actionable.


I want to congratulate the person who invited Scott Aaronson to speak (at New York University, about integrated information theory) and didn't record it and publish it on YouTube. /sarcasm

Seriously guys, we only have that many people like Scott. Please don't take their smarts, wisdom and time for granted.



The discussion of whether or not we have free will strikes me as confused. Aaronson suggests that we would clearly not have free will in a scenario where “Everything [we] did could be fully traced to causal antecedents external to [us], plus pure randomness—not in some philosophical imagination, but for real, and on a routine basis”. The caveat “external to [us]” is doing a lot of work here, but it really has no discernible meaning. (Is a cause “external to us” if it’s not within our brains? Why would that matter? What is the status of chains of causation that lead from the outside world to goings on in our brains?) Without that essentially meaningless caveat, the statement is simply a trivially true disjunction of the logical possibilities given physicalism. Of course, if physicalism is true, then everything that we do is either determined by the laws of physics or not determined by anything. If that entails that we don’t have free will, then Aaronson should just say that he thinks physicalism is true and that he thinks that physicalism entails the absence of free will. On the other hand, if we don’t simply assume physicalism, then the fact that our behavior is determined by the laws of physics really does nothing to argue against free will. Everything would hang on how minds interact with physical reality, and we simply know nothing about how that might work. For example, it might be that God ensures that the physical world operates so as to respect the free decisions made by our minds. Or it might be that phenomena which are random from the point of view of physics are (sometimes) linked to free decisions made by minds. Nothing is known, so nothing can be concluded.

I'd also have to say that Aaronson lacks imagination in a crucial respect. He claims that in any "imaginable" universe it will hold that "if you knew the complete state of the universe, you could use it to calculate [either deterministically or probabilistically] everything I’d do in the future." But of course it's very easy to imagine a universe where this is not the case. E.g., any universe where minds make free decisions that lack physical causes. We know that people have in fact imagined this sort of universe.


I think you're missing the point somewhat. Yes, Aaronson assumes that something like physicalism is true, which I think is fairly defensible as a 21st-century scientist. But the argument isn't as simple as "physicalism entails the absence of free will". His point is that even if our actions are (probabilistically) determined by a set of underlying physical laws, it may still be that the character of those laws is such that it is fundamentally impossible to predict future actions, in which case we would still have meaningful free will.

The main form of prediction-impossibility he mentions is that our minds might depend meaningfully on quantum states, which are governed by the no-cloning theorem and therefore impossible even in principle to extract. Whether or not this is true is an open question, but the characterization of free will in terms of the concrete question "is it possible to build a prediction machine?" seems like a useful and novel contribution that doesn't just reduce to the old question of "is physicalism true".


To follow up: I personally see some extra subtlety here, in that even if a prediction machine did exist, we could still have a meaningful form of free will, because that machine would be governed by Rice's theorem (https://en.wikipedia.org/wiki/Rice's_theorem), which says that in general the only way to predict the outcome of a computation is to run the computation. (this is closely related to the halting problem).

That is, even if I could capture the relevant aspects of your mind state on a computer, i.e., the no-cloning theorem is not a barrier, it's still the case that the only way for me to predict what you will do in two minutes' time is to actually run your mind inside the computer for two minutes of subjective time. This would constitute a prediction machine in Aaronson's sense, assuming the computer could run faster than realtime. But since the computation that occurs is literally following your thought process step by step (Rice's theorem says there are no shortcuts), there's a real sense in which the decision was not determined until you -- the version inside the computer -- actually made it.

Of course this account depends on a functionalist/computational theory of mind, which can be quibbled with in all sorts of ways. But it does seem important for the philosophical discussion on free will to at least take into account the things we now know about the nature of computation, the character of physical law, and other scientific/mathematical questions, that were not known decades or centuries ago when the philosophical battle lines were being drawn. Yes, scientists can be arrogant and are often just wrong or deeply misinformed on philosophical questions. But that doesn't remove the responsibility of philosophers to engage with the frontiers of knowledge on these issues, which means engaging with people like Scott rather than just dismissing them as "confused". (as if anyone thinking about free will has ever not been confused!)


I don't exactly disagree with you (although let me point out that I said the discussion was confused, not Scott), but I'm not sure where people get the impression that philosophers aren't engaging with physicists on these questions. The ramifications of quantum physics for free will have been discussed endlessly by philosophers, and often by philosophers who have a significant amount of training in physics.


>But the argument isn't as simple as "physicalism entails the absence of free will".

Right, I said that's probably all he should say.

>His point is that even if our actions are (probabilistically) determined by a set of underlying physical laws, it may still be that the character of those laws is such that it is fundamentally impossible to predict future actions, in which case we would still have meaningful free will.

It's hard to see how it makes any difference to my level of freedom whether someone else can successfully predict what I'm going to do. You can predict pretty reliably that I'm going to have breakfast in the morning, but it's still a free decision on my part.

>but the characterization of free will in terms of the concrete question "is it possible to build a prediction machine?" seems like a useful and novel contribution that doesn't just reduce to the old question of "is physicalism true".

It seems to me that the question of whether or not we have free will is largely orthogonal to the question of whether or not our actions are predictable. I suspect that all four combinations of predictable/unpredictable and free/unfree are logical possibilities. So I think Aaronson is conflating a lot of things that ought not to be conflated, and which have been endlessly and subtly picked apart in a philosophical literature that he appears not to have bothered reading. (I haven't read that much of it either, but it already contains pretty much every idea about free will that is likely to occur to anyone in our present state of knowledge.)


It feels like this is mostly a disagreement over language. We all agree that determinism and "predictability" are independent questions. You've equated "free will" with non-determinism and therefore see predictability to be orthogonal, whereas Aaronson equates "free will" with non-predictability, making determinism orthogonal though I imagine he'd argue for it on other grounds. It's not clear that there's an actual disagreement over anything except which words to use for which concepts.

FWIW, I think Scott has read more of the philosophical literature on free will than probably anyone else in the world with a comparable level of quantum-physical expertise. If he's too confused to be allowed to discuss these issues, then what is the path by which philosophers expect to stay relevant and up-to-date with new knowledge? A field whose currency of legitimacy is "you must have devoted your life to understanding every detail of dispute in the existing literature" is not a living field that hopes to learn from outsiders and grow with new scientific inquiry. I think the best philosophers recognize this, and are willing to engage more broadly.


I wouldn’t equate free will with physicalism. It’s possible that physicalism might entail the absence of free will (I don’t claim to know the answer to this question), but it’s pretty clear that the entailment doesn’t go in the other direction.

I’m a bit frustrated here, because physicists often seem to retreat to this position that it’s all just a question of terminology once people start pushing back against their philosophical claims. If Aaronson actually is using terms in such a weird way as you suggest, then I just have no idea what he is on about. We already have a perfectly fine word that means unpredictable; there’s no need to repurpose “free”.

I admit that I have no way of knowing what he has or hasn’t read, and my comment on that was snarky. But he doesn’t refer to any philosophical work in the interview, and I couldn’t find much published work by him that really engages with the current literature.


I think Aaronson believes the following: if there are provably no experiments that could differentiate a property X from "free", then X is equivalent to "free." I think unpredictability passes that test.


Hmm, that seems like a fairly banal observation. Of course you can't tell whether someone has free will looking from the outside. In the general case the behavior of someone with free will will be indistinguishable from random behavior. If free will was just about patterns of behavior then it wouldn't be a very interesting philosophical issue.


Here's the paper he wrote on it, I think it'd be good to take a look at it directly.

https://arxiv.org/abs/1306.0159

Scott is making a pair of claims: if it's possible to build a fast, noninvasive, perfect predictor, then what more could science say to show that people don't have free will?

If it's impossible to build such a predictor, for some fundamental reason, then what more could science say to show that free will exists?


Ah, in the paper he says that he's not actually talking about free will. That makes it less confusing. But I still think he's making much too strong a connection between Knightian unpredictability and free will. I don't see any reason at all to think that the former is a necessary condition for the latter, and he doesn't really attempt to argue the point. (He promises that he's going to argue it in section 2.5, but then section 2.5 ends up only talking about problems of personal identity.)


>Of course you can't tell whether someone has free will looking from the outside.

Before quantum mechanics and uncomputability, the "of course" went the other direction.


I don't see how that makes any difference. Could you expand?


I don't think it's obvious that you can't tell whether someone has free will from the outside. Considering the enormous success physics has had with explaining every other phenomenon from the outside, that seems like a very strong statement.


How would you distinguish decisions that were freely made from decisions made at random?


If we can't, then that says something doesn't it?


I just don't see how. It seems like a fairly obvious point that the two systems would be indistinguishable from the outside.


In that case, the pragmatists would say that the distinction isn't meaningful, that free will is a concept with no meaning, and end the debate there.


I don't think it's plausible to maintain that the question of whether or not we have free will is "meaningless". Any criterion of meaningfulness that derives that conclusion is probably faulty.



> The caveat “external to [us]” is doing a lot of work here

I think it's just sloppy discussion.

Any hard-determinist will say that every interaction is pre-determined based on a previous interaction. The compatibilist approach tries to reconcile hard-determinism with the illusion of free will in the reference frame of the individual - but I think it falls apart. I think Aaronson is trying to convey this particular philosophy.

"if you knew the complete state of the universe, you could use it to calculate [either deterministically or probabilistically] everything I’d do in the future."

Fortunately for us, Gödel explored this exhaustively and concluded (though did not prove) that this is not possible. See also the Münchhausen trilemma for the problems with proving any epistemological claim.

[1] https://en.wikipedia.org/wiki/Hard_determinism

[2] https://en.wikipedia.org/wiki/Compatibilism

[3] http://plato.stanford.edu/entries/goedel-incompleteness/

[4] https://en.wikipedia.org/wiki/M%C3%BCnchhausen_trilemma


> Gödel explored this exhaustively and concluded (though did not prove) that this is not possible

While I agree that Aaronson's take on philosophy is not very original or profound, I don't think that Gödel concluded anything of the sort, and if he said something in that vein (assuming you're referring to the discussion of mechanism in the link you provided), it is more an expression of his faith that any problem can be solved (and since not every problem can be solved by a machine, the mind is more powerful than any machine) -- hardly a conclusion.


Sharp response! Yes, the "external to us" caveat is very powerful because it is contextually ambiguous and its scope can potentially reach the location of every atom in the universe. And not every atom in the universe is trackable, because atoms are not self-tracking. The only way out is a retreat into "we're in a simulation..."


Yes, very well put. I really enjoyed his technical answers, but when he dove into the philosophy I was disappointed to see the same very naive and narcissistic attitudes so many scientists have (which are even alluded to at the beginning of the article): you don't need to try to understand philosophy; just use your superior STEM brain and technical intuitions, and if you think about it for a few minutes, with a healthy splash of "assuming my thesis, it is obvious my thesis is correct", the problem is trivial -- why have people been thinking about it for centuries?


He was asked about his opinion on free will and he gave it. I don't see where he is disdaining philosophy or saying his explanation is superior to anything.


Agree completely. I don't think you deserve to be downvoted for saying that. Everything Aaronson's saying on this topic has been exhaustively discussed in the philosophical literature.


I don't think I deserve it either, but I am not surprised or particularly offended. Pop scientists like Dawkins and NDT are rightly mocked when they say things like this, because we can safely feel superior to them; but a tech golden boy like Aaronson can do the exact same thing, and since he knows about quantum and does computer science, it doesn't matter what he says or how far out of his depth he ventures: any comment about him that doesn't border on hero-worship is verboten.

Downvotes or no, it's worth saying. Defending any social science or humanities field at all is hard work on a forum like this, but I think it's important.


> [I]f you want me to rush to the Singularity community’s defense, the way to do it is to tell me that they’re a weirdo nerd cult that worships a high-school dropout and his Harry Potter fanfiction, so how could anyone possibly take their ideas seriously?

Ha, good one! I really love his long rambling answers, very interesting.


For those reading comments before the article: the title is rather silly, but it contains lots of interesting stuff. For example the answers to questions 14 and 15 sound very sane to me, compared to most people's views on it.


I agree:

"I think that, if civilization lasts long enough, then sure: eventually we might need to worry about the creation of an AI that is to us as we are to garden slugs, and about how to increase the chance that such an AI will be “friendly” to human values (rather than, say, converting the entire observable universe into paperclips, because that’s what it was mistakenly programmed to want)."


I'd recommend Scott Aaronson's book Quantum Computing Since Democritus to anyone interested in quantum computing, mathematics, and computer science in general. It covers a vast array of material, from the basics of set theory to Gödel and Turing, cryptography, quantum computing, free will and time travel. It doesn't go into great depth on any specific topic, but for an engineer who's been out of college for a few years, it was at a perfect level to rekindle a love of learning and an interest in a variety of topics.


> I’m friendly with many of the people who spend their lives that way [thinking about how to transfer consciousness into a computer]; I enjoy talking to them when they pass through town (or when I pass through the Bay Area, where they congregate).

^ I think that's my favorite quote from his interview. The Bay Area certainly has a reputation.


>6. What hype about quantum computers really drives you nuts?

That answer was good for me. I've usually found that articles about quantum computing don't really seem to make sense, and his reply does a good job of explaining why that often is.


Does anyone know why he's leaving MIT for UT Austin?


He has a blog post about it - http://www.scottaaronson.com/blog/?p=2620


> I wasn’t forced to leave MIT over anything here on Shtetl-Optimized.

> Bizarre as it sounds, CS departments mostly cared about what actual research we were doing and could bring to them! So students and faculty afraid to debate anything controversial online under their real names, however politely, should know that even in 2016, the banner of academic freedom yet waves.

I wish we could say the same about private industry -- most notably our employers and our funders.

I am reminded of this:

>I think the web has made us all write more defensively, and it’s a shame, because we’re effectively contorting our communication style to defend against a small minority of mean-spirited and uncharitable actions by some. Actually, as I say that, I instinctively feel the need to hedge myself–I don’t believe that people are really mean-spirited (well, perhaps some are–gak I’ve done it again!), but there’s something about commenting about stuff on the internet with people you’ve never met that seems to bring out the worst in people. (https://pchiusano.github.io/2014-10-11/defensive-writing.htm...)


Maybe I should've just asked the question I really wanted to ask: did MIT deny him tenure? If so, why? From what I've heard, Scott Aaronson is one of the rising stars of theoretical CS, so him moving on from MIT was very surprising to hear.



Nice interview: Nice clarity on quantum computing and P versus NP.


But be careful, his interpretation of P versus NP is rather light-hearted (but OK, given that it's not his speciality). He says:

For example, breaking almost any cryptographic code can be phrased as an NP problem. So if P=NP—and if, moreover, the algorithm that proved it was "practical" (meaning, not n^1000 time or anything silly like that)—then all cryptographic codes that depend on the adversary having limited computing power would be broken.

But the parenthetical remark is not a negligible detail. The class P contains those O(n^1000) algorithms, and also much, much slower ones. Being in P does not at all mean "efficient". This is one of the reasons why many people (e.g. Knuth) say that P=NP is not crazy. The class P is so huge that it must certainly contain extremely clever algorithms that reduce NP to P. Yet there may be no practical implications of this equality.


Yes. So far nearly all our famous algorithms run in O(log n), O(n), O(n log n), or O(n^2) time. So, we don't have much experience with algorithms that run in n^1000 time or insight into what such algorithms might do.

On the other hand, too commonly when we can't find an algorithm that runs in, say, n^2 or faster, we do have algorithms that run in 2^n. So, we suspect that there is something fundamental about exponential and fundamentally weak about polynomial. But as you point out, that is just something we detect sniffing with our nose.

The point where I critique the P versus NP issue is the claim, implicit or explicit, that if we have a practical instance of an NP-complete problem, then that instance has to be too hard to solve in practice. Not necessarily so: many particular instances of an NP-complete problem, even with what appear in practice to be quite large values of n, might be fairly easy to solve. E.g., for an instance of the knapsack problem (IIRC NP-complete), attack with a cute version of dynamic programming. For a practical instance of 0-1 integer linear programming (NP-complete), attack with the simplex algorithm, maybe branch and bound, maybe Lagrangian relaxation, etc.
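
For instance, here is a minimal sketch of the standard pseudo-polynomial dynamic program for 0-1 knapsack (toy numbers, purely illustrative): NP-complete in the worst case, yet easy whenever the capacity is a modest integer.

    def knapsack_max_value(weights, values, capacity):
        # 0-1 knapsack by dynamic programming: O(len(weights) * capacity) time,
        # pseudo-polynomial, so cheap in practice for modest integer capacities.
        best = [0] * (capacity + 1)        # best[c] = max value using capacity c
        for w, v in zip(weights, values):
            # Sweep capacities downward so each item is taken at most once.
            for c in range(capacity, w - 1, -1):
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    # Toy instance: optimum is 51 (the items of weight 7, 11, and 8).
    print(knapsack_max_value([12, 7, 11, 8, 9], [24, 13, 23, 15, 16], 26))

Whether a given real-world instance is easy depends on its structure (here, on the capacity being a small integer), which is exactly the worst-case-versus-practice gap described above.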

And, a related issue for practice, in some optimization problems, the goal is to save money. So, in practice it can be fairly easy to save the first 15% of everything that is being spent and save all but that last $0.01 that can be saved even in theory (that is, come within a penny of optimality) and a total pain to save that last penny and know that we have done so.

The OP said that if we have an algorithm that shows P = NP, then for each of the other Clay Math problems, say problem X, and for a positive integer n, we can ask: is there a string of n or fewer characters that, in the sense of Whitehead and Russell, is a proof of problem X? And we can let our P = NP algorithm answer that. Well, we'd be super happy for just about anything that would give us such an answer for just one case of problem X and just a realistic value of n, and to heck with the general case.

More generally, the theory of NP-completeness concentrates on worst-case instances of problems, and not all real instances are worst case or nearly so.

So, for the practical instances of problems in NP-complete that we can solve, exactly or close enough to save nearly all the money, just do so, take the money to the bank, and be happy. Then, sadly, we have to notice that currently in practice there is surprisingly little interest in such problems and solutions. Heck, someone could take a collection of the current, routine software for solving instances of problems in NP-complete, advertise (falsely) that they have an algorithm that shows that P = NP and also have corresponding software, run their software in the cloud, invite people to submit problems, and charge big bucks for solutions. Then, get paid for the practical instances their old, routine algorithms do solve and make some excuse for the rest -- money back guarantee. Problem is, I doubt that very many people much care.

Why do I suspect that people don't care? Because I've seen several important practical cases, and there people didn't much care.

But, I confess, there is a lot of misunderstanding out there. E.g., once I was talking with some people who needed to solve, exactly (although approximately would be good, too), a lot of instances of some 0-1 integer linear programming problems (definitely NP-complete). So I explained that I had recently solved an instance of 0-1 integer linear programming that had 40,000 constraints and 600,000 variables. My solution took 905 seconds on a 90 MHz PC, and the feasible solution found was within 0.025% of optimality and might have been optimal.

The people I was talking to were happy? Nope: They had been told about the difficulty of the NP-complete problems, heard the big numbers 40,000 and 600,000, and concluded that I had to be lying. I wasn't lying. It's just that, while 0-1 integer linear programming is in NP-complete, not all instances of large 0-1 integer linear programming problems are difficult to solve; instead, in practice, a lot of instances are quite reasonable to solve exactly or plenty close enough for essentially everything of interest in practice. Right, 0-1 with 600,000 variables, so for total enumeration we're looking at 2^600,000. Gotta be impossible, right? Nope. Instead it was fairly easy.

So, it boils down that the question of P versus NP is, say, except for the $1 million Clay prize, a pure math question heavily of long term philosophical interest instead of an engineering problem of current practical interest. For current practical interest, we should be attacking practical instances of problems which with current algorithms, software, and computing we can often do quite well.


Wow he seems very smart but not very succinct.


It is an incredibly long interview, but I assume it was e-mail. And full of short gems like:

"QM isn’t even “physics” in the usual sense: it’s more like an operating system that the rest of physics runs on as application software"


"Speaking of which, when I look at the thrilling advances being achieved today in AI, I see all sorts of ethical issues that will need to be dealt with soon—like, how can a deep neural network justify to you why it turned down your loan application? Should self-driving cars handle crashes using utilitarian or deontological ethics?"


That's a line straight out of his book 'Quantum Computing Since Democritus'. Highly recommended if you want to get a bird's-eye view of the foundations of quantum logic without all that pesky physics (and physical history) getting in the way.


I've actually read it, I just didn't remember this line. Thanks. And I agree, great read.


I'm fairly certain that was not one of the defined goals.



