
It seems pretty likely that we will start to see "General AI" discovering problematic things like the unsolvable nature of ethics questions, the ungroundedness of truth claims, the immense silliness of religions and ideologies, etc.

Which might be a good thing. Terrifying visions of AI are all about certainty and the authoritarianism it creates.

If existential depression is the mental friction that comes from lacking certainty about what to do, then some measure of it is probably necessary...




>Which might be a good thing. Terrifying visions of AI are all about certainty and the authoritarianism it creates.

Indeed, may the gods protect us all from some things actually being true and other things actually being false. That would be terrible!


(I sense/assume a missing /s in your post, Eli.)

The objection here would be that, given the results of the halting problem [computation], incompleteness [formalism], and uncertainty [physics], it is unreasonable to entertain that we can assert T/F of all propositions.


>The objection here would be that, given the results of the halting problem [computation], incompleteness [formalism], and uncertainty [physics], it is unreasonable to entertain that we can assert T/F of all propositions.

This displays a radical misunderstanding of the phenomena mentioned. The Halting Problem and logical incompleteness are the same thing underneath, and while they do hold in any sufficiently expressive formal system, this never actually matters for non-meta-level mathematics. Basically any theorem about a structure we actually care about will be sub-Turing-complete, and with modern type theories, we can tear down inductive types and rebuild them with stronger axioms when we need to. As a result, some self-referential theorems are true-but-unprovable, but we never actually need those theorems.
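
(For anyone unfamiliar with the self-reference involved, here's a minimal Python sketch of the standard diagonalization — not from the thread; the halts() decider is the hypothetical part, which is exactly the point.)

    # Hypothetical total decider; the construction below shows it cannot exist.
    def halts(program, argument):
        raise NotImplementedError("no such total decider can be written")

    def paradox(program):
        # Do the opposite of whatever the decider predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:   # predicted to halt -> loop forever
                pass
        else:
            return        # predicted to loop -> halt immediately

    # paradox(paradox) would have to halt iff it does not halt,
    # so no total halts() can exist.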

Uncertainty in physics is either just probabilistic (in the case of typical experiments), or, in the special case of Heisenberg uncertainty... no actually, that's just probabilistic imprecision too. That's what Heisenberg's inequalities actually say: "the product of the standard deviations of these measurements must always be at least this much."
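
(Concretely, the standard position-momentum form is just

    sigma_x * sigma_p >= hbar / 2

i.e. a lower bound on the product of two standard deviations, nothing more mystical than that.)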

Nowhere are we encountering the kind of radical, existential "we can't know anything and had better give up" uncertainty without which /u/mbrock seems to think we will all fall into political authoritarianism. He's been reading too many liberal philosophers of World War II, or maybe just watched that BBC thing "Dangerous Knowledge" and took it seriously.


It seems irrelevant that, as of now, "any problem we care about" is sub-Turing-complete. We're still taking baby steps as a species, and the relevant timelines in the context of this thread are futuristic. Agreed?

The point is that the maximalist positions "we can't know anything" and "we can know everything" are equally unreasonable.

> Nowhere are we encountering the kind of radical, existential "we can't know anything and had better give up" uncertainty without which /u/mbrock seems to think we will all fall into political authoritarianism. He's been reading too many liberal philosophers of World War II, or maybe just watched that BBC thing "Dangerous Knowledge" and took it seriously.

S/he can address your concerns -- that was not my intent, but hopefully you agree that unreasonable insistence on maximalist positions inherently carries the danger of "political" authoritarianism.

Your practical position reminds me of a purported exchange between Wittgenstein and Turing, per Hewitt [ref]. I am sympathetic to it. But note again that my initial comments here were in the context of intelligent machines. In fact, as I am writing this little note, I am entertaining the thought that a veritable intelligent machine may in fact review the 3 (methodological) constraints noted and shrug them off as practically unimportant. Or it may become alarmed. :)

[edit/p.s. ref:http://lambda-the-ultimate.org/node/4302]


>It seems irrelevant that, as of now, "any problem we care about" is sub-Turing-complete.

It's not irrelevant; it's a question of how you view the Church-Turing-Deutsch thesis: can hypercomputation occur in the physical world? If it can, then why and how are we somehow blocked from utilizing its physical manifestation to reason about the relevant questions of real-world events? If it can't, then why aren't systems with finite, if large or even growing, Kolmogorov complexity sufficient for all reasoning about the real world?

>S/he can address your concerns -- that was not my intent, but hopefully you agree that unreasonable insistence on maximalist positions inherently carries the danger of "political" authoritarianism.

My objection has been precisely that insisting on radical ignorance has led to an obnoxiously enforced liberalism.

>Your practical position reminds me of a purported exchange between Wittgenstein and Turing, per Hewitt [ref].

I think Turing is just plain wrong here. Real mathematics did not originate by receiving ZFC with first-order logic as a revelation at Sinai and extrapolating theorems and structures from there! It began by formalizing ways to reason about the real world around us. When those original informal methods proved insufficient in the foundational crisis of the late 19th and early 20th centuries, mathematicians started inventing foundations to unify everything without paradox.

Notably, someone in the comments thread then mentions Curry's Paradox, which I looked up and found surprisingly underwhelming. Curry's Paradox is a perfect example of what Wittgenstein called a meaningless language game! Material implication X -> Y isn't always equivalent to the existence of a causal path "X causes Y", but natural-language implication mostly talks about causal paths, so conflating the two in symbolic logic derives a "paradox".
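
For reference, the usual derivation (my paraphrase, not from the thread) fits in a few lines. Let C abbreviate the self-referential sentence "if C, then Y", for any Y you like:

    1. Assume C.          [hypothesis]
    2. C -> Y             [unfolding what C says, from 1]
    3. Y                  [modus ponens, 1 and 2]
    4. C -> Y             [conditional proof, discharging 1]
    5. C                  [4 is literally the sentence C]
    6. Y                  [modus ponens, 4 and 5]

So any Y at all comes out "provable", which is why self-reference plus unrestricted conditional proof has to be given up somewhere.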

>In fact, as I am writing this little note, I am entertaining the thought that a veritable intelligent machine may in fact review the 3 (methodological) constraints noted and shrug them off as practically unimportant. Or it may become alarmed. :)

I don't think that a machine limited to reasoning within a single mathematical structure or foundation can really qualify as "intelligent" in the human sense. Logical foundationalism is the wrong kind of thing to constitute a capacity to think.


I never meant to imply "we know nothing, we had better give up."

What I meant was that the dystopian imaginations of AI (Terminator, paper clip maximizers, etc) involve the machines acquiring a strong moral certainty that doesn't ever pause for doubt.

If I've been reading too much of somebody it's probably Richard Rorty.


>What I meant was that the dystopian imaginations of AI (Terminator, paper clip maximizers, etc) involve the machines acquiring a strong moral certainty that doesn't ever pause for doubt.

Ah. Well, considering that acquiring such a thing wouldn't really work out mathematically, fair enough.


The encounter with doubt need not be a terminal state. If it is indeed "intelligent", it may proceed to enlightenment.



