Proofs, Computability, Undecidability, Complexity, and the Lambda Calculus [pdf] (upenn.edu)
135 points by bmc7505 on April 8, 2021 | 72 comments



This is but one of many comprehensive texts about mathematics written by Jean Gallier and Jocelyn Quaintance! They are all a little unpolished, but still an invaluable resource for students!

Here are some more book titles copied from his website: (https://www.cis.upenn.edu/~jean/home.html)

* Linear Algebra and Optimization with Applications to Machine Learning (html)

* Differential Geometry and Lie Groups (html)

* Homology, Cohomology, and Sheaf Cohomology (html)

* Proofs, Computability, Undecidability, Complexity, and the Lambda Calculus. An Introduction (pdf)

* Aspects of Harmonic Analysis and Representation Theory (html)

* Algebra, Topology, Differential Calculus, and Optimization Theory for Computer Science and Machine Learning (html)

* Aspects of Convex Geometry: Polyhedra, Linear Programming, Shellings, Voronoi Diagrams, Delaunay Triangulations (html)

* Notes on Primality Testing and Public Key Cryptography Part 1: Randomized Algorithms Miller-Rabin and Solovay-Strassen Tests (08/2017) (pdf)

* Spectral Graph Theory of Unsigned and Signed Graphs Applications to Graph Clustering: A Survey (12/2014) Posted to arXiv as paper cs.CV arXiv:1311.2492 (pdf)

* Introduction to discrete probability (12/2014) (pdf)


Such breadth! You could dedicate a lifetime to learning only one of those topics!


I tried to access "Differential Geometry and Lie Groups" but it seems there is only the table of contents.


Wow! There is a lot there to read.


I'm looking at this now. It's a bit idiosyncratic but looks good. It covers standard topics in proof and recursion theory but from a more contemporary approach using natural deduction instead of Hilbert-style proof systems, and it also does complexity theory with various computation models. There are a fair number of exercises and problems, always a good thing. But it covers specialized topics at the expense of some standard ones. For example, I didn't spot any mention of the compactness theorem. So it's hard to call the book a usable general introduction to logic, despite it starting at a relatively beginner level.

It's a draft and it has some typos and some long-winded passages that could use editing, but I'm glad it is out there, and it looks to me like a good way for computer people to learn some logic and computability theory if that strikes their fancy. I've only looked at the first couple of chapters, but the TOC mentions that Girard's polymorphic lambda calculus makes an appearance, so there must be some type theory in there. But my preference would be to increase the coverage of that even if it means trimming back some of the computability stuff. Another improvement might be to include some coverage and exercises in mechanized proof checking using Coq, in the spirit of Software Foundations (also by authors at Penn, https://softwarefoundations.cis.upenn.edu/ ).

Thanks to whoever posted this link!


Could you provide a couple of alternative recommendations in contrast with this? That would be very useful!


Well, it depends on what you are looking for, but the traditional-style book that I'm used to is H. B. Enderton, "A Mathematical Introduction to Logic", and Enderton also has an accompanying book on set theory. I guess I'd suggest those as a companion rather than an alternative to the Gallier and Quaintance book. The G&Q book is imho more modern and more interesting from a CS-oriented perspective. It just leaves out too many important fundamentals. Alternatively, since it is still a work in progress, maybe some of the missing stuff could be added.

Boolos and Jeffrey's "Computability and Logic", whatever the current edition is, is supposed to be good, but I haven't read it yet.

The G&Q book is weird in that it is about interesting topics that I'd consider slightly advanced, but written in a way that assumes almost no background, so if it's the only thing you read, you'll still be missing important background. I'd like a book that uses the G&Q approach but that includes the background. If it kept all the current contents and was expanded, it would probably end up as 2 volumes, which is fine. Or it could leave out some of the specialized stuff, but I think that stuff is there because that was what the authors wanted to write about, so I won't judge ;-).


Can't edit to add this, so will post an additional reply with an MO thread that has a bunch of other recommendations. I will look into some of them myself.

https://mathoverflow.net/questions/44620/undergraduate-logic...


Unfortunately, the text by Gallier and Quaintance is somewhat obsolete because it misses the following developments:

* The Church/Turing Thesis is false for reasons outlined here:

https://professorhewitt.blogspot.com/2021/03/the-churchturin...

* Gödel failed to prove inferential undecidability in foundations for reasons outlined here:

https://professorhewitt.blogspot.com/2021/03/godel-failed-to...

* That "theorems of an order can be computationally enumerated" is true but unprovable for reasons outlined here:

https://professorhewitt.blogspot.com/2021/03/churchs-dilemma...

However, the text by Gallier and Quaintance does provide historical perspective.


Let ◻ be the modal operator "It is provable that", so ◻P means that P is provable.

The first part of your argument in the second link says:[

Suppose that there is a proposition UNP such that UNP ⇔ ¬◻UNP.

Then, suppose that ¬UNP. It then follows that ¬¬◻UNP, i.e. that ◻UNP.

Therefore, ◻◻UNP.

Then, as UNP ⇔ ¬◻UNP, therefore ¬UNP ⇔ ◻UNP.

Then, replace the ◻UNP in ◻◻UNP with ¬UNP, and therefore conclude ◻¬UNP.

So, at this point, we have concluded that ◻¬UNP and also that ◻UNP. So you can conclude, __under these assumptions__, that ◻⊥.

So, under the assumption that there is a statement UNP such that UNP ⇔ ¬◻UNP, and the additional assumption that ¬UNP, you can conclude that ◻⊥.

]

Indeed.

If a system admits a statement UNP, (and the system is strong enough to do the appropriate reasoning) (and peano arithmetic and such does admit such a statement UNP), then within the system, you can show that ¬UNP implies ◻⊥. I.e. "if the statement that claims it cannot be proven, can be proven, then the system proves a contradiction". This is entirely normal, and does not overturn anything.

However! While the assumption that ¬UNP allows us to derive ◻⊥, it does not allow us to derive ⊥. We have __not__ shown that ¬UNP → ⊥; we have only shown that ¬UNP → ◻⊥. Therefore, this does not justify appealing to proof by contradiction and concluding UNP (nor ◻UNP).
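
To make the bookkeeping explicit, here is the whole derivation in one place (a sketch, assuming the usual provability-logic rules: necessitation, ◻-distribution over ⇒, and ◻P ⇒ ◻◻P, and assuming UNP ⇔ ¬◻UNP is a theorem of the system, as it is for Gödel's sentence, so that it can be boxed):

    1. UNP ⇔ ¬◻UNP       (theorem, via diagonalization)
    2. ¬UNP               (hypothesis)
    3. ◻UNP               (from 1 and 2)
    4. ◻◻UNP              (from 3, by ◻P ⇒ ◻◻P)
    5. ◻(◻UNP ⇒ ¬UNP)     (necessitation on the contrapositive of 1)
    6. ◻¬UNP              (from 5 and 4, by distribution)
    7. ◻⊥                  (3 and 6: the system proves both UNP and ¬UNP)

Every step from 3 onward happens inside the box; nothing discharges to a bare ⊥.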


I've not read ProfHewitt's argument, so I'm just looking at your summary.

If ◻P means "P is a proposition that is formally provable within some implicit formal system" (perhaps there should be a subscripted letter to show which formal system because there are an infinite number of them), then I don't understand what ◻◻UNP means, because ◻UNP is not a proposition of the implicit formal system.


If we were doing this in Peano Arithmetic using Gödel numberings, then ◻P would be expressed by taking the Gödel number encoding P and applying a predicate that expresses provability to that number. This might be written as Provable«P», with the quotes indicating taking the Gödel number of the proposition P, in order to encode it in a way that Peano Arithmetic can talk about. (Usually one uses the right angle brackets on the upper corners for the quoting, but my phone doesn't have that on the keyboard, which is why I'm using "«" and "».) Statements like this are statements in the same formal system as P.

The modal logic known as “provability logic” is a nice abstracted system which captures how these things work, and allows for dealing with them in a general case.

◻◻P is perfectly sensible, and if the system is able to describe itself sufficiently well (which is not a high bar; Peano Arithmetic satisfies this requirement), then if you can derive P within the system (without additional assumptions), then, simply by including a proof that the proof of P is a proof of P, one can produce a proof in the system of ◻P. And so, if one can derive ◻P, so too can one derive ◻◻P.
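
For reference, the three Hilbert–Bernays–Löb derivability conditions are exactly what provability logic abstracts; stated here in the ◻ notation (PA with its standard provability predicate satisfies all three):

    D1. If ⊢ P, then ⊢ ◻P               (necessitation)
    D2. ⊢ ◻(P ⇒ Q) ⇒ (◻P ⇒ ◻Q)          (distribution)
    D3. ⊢ ◻P ⇒ ◻◻P                      (internalized necessitation)

The ◻P-to-◻◻P move described above is D3, which is proved by formalizing the proof of D1 inside the system itself.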

(ProfHewitt’s post also uses basically the same thing as ◻◻P , though he writes it using the turnstile symbol rather than the square symbol, and I prefer the square symbol for it, and the turnstile for something slightly different. Not saying the use of the turnstile for it is wrong though.)


Unfortunately, the Gödel number of a proposition does not represent the order of the proposition. Consequently, the [Gödel 1931] proposition I'mUnprovable cannot be constructed in foundations because the Diagonal Lemma used in [Gödel 1931] does not work for propositions with orders.

Provability Logic is built on the existence of the proposition I'mUnprovable :-(


You’re free to use a system that has orders of propositions, and I’m sure there are interesting things to be said about systems with such orders. (The set theory of NFU seems rather appealing, and stratified formulas seem somewhat analogous.)

(And yes, if you restrict what a system can do, it is possible to produce a system which can, in a sense, prove its own consistency. Dan Willard produced one such system. It has subtraction and division as fundamental rather than addition and multiplication.)

However, a theory is not required to have orders of propositions (or, perhaps you might prefer saying this as "A theory can have all of its propositions be of the same order"?).

Furthermore, in a formal system modeling the mathematical properties of a syntactical system (as in, a set of rules describing allowed transformations on some strings), this also does not require having multiple orders of propositions.

And modeling what things are provable in a given formal system, is just that same kind of thing.

So, when describing in a formal system which well-formed-formulas can be derived within that system, it is not necessary (in the sense of “you don’t have to do it.”) to use multiple orders of propositions.

(Of course, in any formal proof of a wff which we interpret as having a meaning, there is perhaps a kind of gap between the string of characters which is provable within the system, and the thing which we interpret it to mean. However, if the system is sound with respect to our interpretation of it, then the things it proves will, under that interpretation of the system, correspond to a meaning which is true. As such, it is usually safe to elide the distinction between “the string corresponding to this proposition is derivable in this formal system” and “this system proves this proposition” (where “this proposition” is taken to refer to some meaning).

When interpreting statements made in a system “about” that system, there are kind of two levels of interpretation, kinda-sorta. First, when we interpret a proof within the system that some statement(s) is(are) (not) provable in the system, we first have to interpret the string we have derived as corresponding to the meaning of a claim about the system, namely, a claim about what strings can be derived in the system. At that point, we also interpret what those strings that we interpret the first string as referring to, would mean. This can perhaps sometimes be a little confusing. Keeping this distinction in mind should make it clear why there is no need for orders of propositions when dealing with a system referring to what it can and can’t prove.)


Orders on propositions are crucial for the consistency of foundations for reasons explained in the following article:

"Epistemology Cyberattacks"

https://papers.ssrn.com/abstract=3603021


Are you claiming that (e.g.) ZFC (which does not have orders for propositions) is not "foundations", or that it isn't consistent?

Or by "foundations" are you referring to a particular system you are proposing as a foundational system, and which you have named "foundations"?

You appear to justify the argument on the basis of the idea of a liar sentence.

As exemplified in NFU, it is not necessary to give strict orders to things, as long as you put restrictions on how things are constructed. TST has linearly ordered types, but NFU has no need to introduce these types and orders, as just requiring that formulas be stratified is sufficient.

There is no liar sentence in Peano Arithmetic. It isn't a well-formed-formula. Partitioning propositions into orders is not needed in order to prevent it being a well-formed-formula. Just, don't include anything in your rules for what counts as a wff which would let you define it.

(You may object: what if one just does the Gödel numbering thing to do some quine-ing, and uses that to produce a liar sentence? But you can't express "The proposition [some number] encodes, is false" in PA (see Tarski's undefinability theorem).)

It isn't like UNK is defined in PA as "a statement UNK such that UNK iff not(provable('UNK'))". That wouldn't be a wff in PA. Rather, it is some long expression involving a bunch of quantifiers over natural numbers, and also a bunch of large numbers, and a bunch of arithmetical relations, and happens to be such that one can prove (in PA) that [UNK iff not(provable('UNK'))].
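
For anyone following along, the fact being used here is the diagonal lemma, stated (in its usual form, which I'm taking as given) as: for every formula φ(x) with one free variable, there is a sentence ψ such that

    PA ⊢ ψ ⇔ φ('ψ')

where 'ψ' is the numeral for the Gödel number of ψ. Taking φ(x) to be ¬Provable(x) produces the Gödel sentence; trying to take φ(x) to be ¬True(x) fails, because by Tarski's theorem no such truth predicate is definable in PA.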


Excellent questions!

In order to be rigorous, a mathematical abstraction needs to be characterized up to a unique isomorphism. However, ZFC does not meet the requirement, because ZFC does not characterize sets up to a unique isomorphism.

However, there is a theory that characterizes Ordinals up to a unique isomorphism, as explained in the following article:

"Theory Ordinals can Replace ZFC in Computer Science"

https://papers.ssrn.com/abstract=3457802

Orders can be used to block many different contradictions based on attempted recursive definitions (beyond just The Liar), as explained in the following article:

"Epistemology Cyberattacks"

https://papers.ssrn.com/abstract=3603021

The Gödel number of a proposition does not work in foundations because it does not represent the order of the proposition and because there are uncountably many propositions in foundations. Criteria for foundations are proposed in the following article:

"Information Security Requires Strongly-Typed Actors and Theories"

https://papers.ssrn.com/abstract=3418003


When appealing to authority, could you provide any sources that have not been written by yourself?


In his defence, it isn't a case of the 'usual' HN appeal to authority fallacy: https://en.wikipedia.org/wiki/Carl_Hewitt


Thanks exdsq!

The Wikipedia article on Carl Hewitt is way out of date! And there is no way to fix it :-(

Consequently, the Wikipedia article should be deleted in order not to mislead readers.


The link to the article is not an appeal to authority. Instead, the article is where you can learn more about the topic under discussion.

There are many references in the article that provide additional background.


> If ◻P means "P is a proposition that is formally provable within some implicit formal system" (perhaps there should be a subscripted letter to show which formal system because there are an infinite number of them)

It doesn't mean that. The formal system is explicit: it's the system we're using to write statements like `◻P` (it's a form of modal logic https://en.wikipedia.org/wiki/Modal_logic ).


Thanks Chris!

It would be perfectly reasonable to be more explicit and use a subscript name for the theory after the ⊢. In fact, when more than one theory is being used, it is necessary to use the subscript to avoid ambiguity.

Unfortunately, many modal logics do not allow the subscript.


Excellent question Bloak!

The proof is for a foundation that can formalize its own provability. Consequently, ⊢⊢I'mUnprovable means that ⊢I'mUnprovable is provable in the foundation.


Prove I'mUnprovable using proof by contradiction as follows:

    In order to obtain a contradiction, hypothesize
    ¬I'mUnprovable. Therefore ⊢I'mUnprovable
    (using I'mUnprovable ⇔ ⊬I'mUnprovable). Consequently,
    ⊢⊢I'mUnprovable using ByProvabilityOfProofs
    {⊢∀[Ψ:Proposition<i>] (⊢Ψ)⇒⊢⊢Ψ}. However,
    ⊢¬I'mUnprovable (using I'mUnprovable ⇔ ⊬I'mUnprovable),
    which is the desired contradiction in foundations.

Consequently, I'mUnprovable has been proved to be a theorem using proof by contradiction, in which ¬I'mUnprovable is hypothesized and a contradiction derived.

In your notation, the proof shows that the following holds:

    ¬I'mUnprovable ⇒ ⊥

Why do you think that the proof is for the following?

    ¬I'mUnprovable ⇒ ⊢⊥


Because ◻P and ◻¬P together imply ◻⊥, not ⊥.

Edit: if you had as an axiom, or could otherwise prove within the system, that ¬◻⊥, i.e. that the system is consistent, then you could conclude from ◻P and ◻¬P that ⊥, by first concluding ◻⊥, and then combining this with ¬◻⊥.

And in this case, you could indeed say that this is a contradiction, and therefore reject the assumption of ¬UNP, and then conclude UNP without assumptions.

So, if one could show ¬◻⊥ (or if the system had it as an axiom), the reasoning would go through. (This is the thing that is missing.)

Therefore, you could then prove ◻UNP, and therefore ¬UNP, and therefore (having already shown UNP) would have a contradiction, ⊥.

So, this is a way of showing that, if you can show ¬◻⊥ (and if there is a statement UNP), then the system is inconsistent. Which is just one of Gödel's theorems: a strong system can't prove its own consistency without being inconsistent.
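
Compressed into the notation from before (a sketch; step 1 is the formalized first incompleteness theorem, which takes genuine work to carry out inside PA and is taken as given here):

    1. ⊢ ¬◻⊥ ⇒ UNP                   (formalized first incompleteness theorem)
    2. If ⊢ ¬◻⊥, then ⊢ UNP          (from 1 by modus ponens)
    3. If ⊢ UNP, then ⊢ ◻UNP         (necessitation),
       hence ⊢ ¬UNP                  (by UNP ⇔ ¬◻UNP)
    4. So ⊢ ¬◻⊥ yields both ⊢ UNP and ⊢ ¬UNP: inconsistency.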


But the proof shows the following:

    ¬I'mUnprovable ⇒ ⊥

By the way, because the proposition I'mUnprovable does not exist in foundations, it is OK for foundations to prove their own consistency, as follows.

Consistency of a theory can be formally defined as follows:

    Consistent ⇔ ¬∃[Ψ] ⊢Ψ∧¬Ψ

Contra [Gödel 1931], a foundational theory can prove its own consistency as shown in the following theorem:

Classical theorem. ⊢Consistent

    Classical proof. In order to obtain a contradiction,
    hypothesize ¬Consistent. Consequently, ¬¬∃[Ψ] ⊢Ψ∧¬Ψ,
    which classically implies ∃[Ψ] ⊢Ψ∧¬Ψ by double negation
    elimination. Consequently, there is a proposition Ψ0
    such that ⊢Ψ0∧¬Ψ0 (by eliminating the existential
    quantifier in ∃[Ψ] ⊢Ψ∧¬Ψ). By the principle of
    ByTheoremUse {⊢∀[Φ] (⊢Φ)⇒Φ} with Ψ0 for Φ, Ψ0∧¬Ψ0,
    which is the desired contradiction.

However, the proof does not carry conviction that a contradiction cannot be derived, because the proof is valid even if the theory is inconsistent. Consistency of the mathematical theories Actors and Ordinals is established by proving that each theory has a unique-up-to-isomorphism model with a unique isomorphism.

See the following for more information:

"Epistemology Cyberattacks"

https://papers.ssrn.com/abstract=3603021


ByTheoremUse is not a valid principle, at least for any system that includes Peano Arithmetic. (It is essentially the assumption that the system is not only consistent, but sound. It is therefore no surprise that ByTheoremUse would imply Consistent.)

By Löb's theorem, ByTheoremUse would imply that, for all propositions P, P holds. I.e., it implies ⊥.
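
Spelling that out (a sketch, taking Löb's theorem as given: for any P, if ⊢ ◻P ⇒ P, then ⊢ P):

    1. ByTheoremUse asserts ⊢ ◻P ⇒ P for every proposition P.
    2. Löb's theorem then yields ⊢ P for every proposition P.
    3. In particular, ⊢ ⊥.

So a schema asserting internal soundness, added to any system containing enough arithmetic, proves everything.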


ByTheoremUse goes all the way back to Euclid. It says that "a theorem can be used in a proof." It is part of many modern logic systems.

Löb's result depends on Gödel numbers, which are invalid in foundations because the Gödel number of a proposition does not include the order of the proposition.

ByTheoremUse works fine for Dedekind's axiomatisation of the natural numbers using uncountably many axiom instances :-)


Perhaps I misunderstood what you mean by ByTheoremUse .

If you mean that, if in a context/environment with certain givens, one can derive a conclusion, then one can apply that in other cases, or if you just mean modus ponens, or cut elimination, then OK, that's fine. That's valid. (Though it doesn't justify the step you cited it in.)

But you can’t, within the system, go from ◻P to P. That isn’t a valid rule of inference.

There’s a distinction between “therefore P” and “therefore ◻P”, and you cannot use the latter as the former.

You seem to equivocate between "P is provable" and "therefore P".

Like, suppose I was writing a computer program in a strongly typed language, and something required an argument of type A->B , and I tried to pass in a string which has the text of a function with that type. Obviously that wouldn’t type check.


TheoremUse does indeed mean (⊢Φ)⇒Φ in powerful foundational theories. TheoremUse is not a problem because the [Gödel 1931] proposition I'mUnprovable does not exist in foundations.

Do you think that you can derive a contradiction in foundations by utilizing TheoremUse?


By the way, your posts are coming out formatted in a way that makes them fairly hard to read...


I don't know that I can derive it in the system you described which you are calling "foundations", because I'm not sure that it is sufficiently clearly defined to, uh, clearly express a particular formal system.

In other systems, yes, I can prove a contradiction using ByTheoremUse, because of Löb's theorem (which I can prove).

(examples of issues in the descriptions you give of the system you attempt to define:

After introducing the standard notation that x:T is to be interpreted as x is of type T, you later go on to talk about "x:Term<T>" for some type T. What this is supposed to mean is very unclear. If what you mean by it is "x is a term of type T", then, in the ordinary sense, this would be the same thing as saying x:T . But if this is all you meant, presumably you would have just written that. So it seems that you mean to have that there are some cases where "x is of type T" and "x is a term of type T" mean different things. In this case, you should clarify what you are using "term" to mean, because it does not seem to be the usual use of the term.

You also say x2^{x1}:Term<t2^{t1}> , when t2^{t1} has been defined to be the type of functions from the type t1 to the type t2. What in the world is this supposed to be? Given a term of type t1 and a term of type t2, what function from t1 to t2 is this supposed to represent? The only one which this would guarantee to exist would be the constant function that sends everything to the given term of type t2 (so, sends everything to x2). But in this case, why require a term of type t1? The only point to that that I can see, would be to guarantee that the type t1 is nonempty, if for some reason you didn't want to allow functions with empty domain. What is the purpose of this?

In your system which is intended to be a foundation of things, you seem to implicitly assume that there is some parameterized type called 1to1, which you don't define. Now, presumably you mean for this to be the type of bijections from the type/set which is the first argument to the type/set which is the second argument, and of course such a thing should exist, but, seeing as you are laying out an attempt at a foundational system, you really ought to actually define it. Either it should be given a definition within the system, or, if it is meant to be an irreducible part of the system, the rules of inference around it should be given.

You do the same thing with "TermWithNoFreeVariables". This is mixing up different levels of description. You never describe what String<TermWithNoFreeVariables<t>> is supposed to mean.

If the text colors you gave to parts of it was supposed to mean anything, you don't say what it is that the colors mean. As is, they are simply distracting.

You also assert that constructive type theory uses the assumption that something is a proposition of a theory only if it is a theorem of the theory? This is false. It is difficult to understand the confusion which would lead to such a conclusion. You also assert that constructive type theories have, instead of a type of all functions from t_1 to t_2, a type of computable functions from t_1 to t_2. There are a couple of things you could have meant by this, but the most straightforward interpretation is false, I think? Just because you can't construct a function doesn't mean that it isn't in the function type in question, and anything which accepts that function type as an argument must be able to handle any such function. Well, this may depend somewhat on the specific system in use, but in all the ones I've dealt with, this doesn't seem to apply.

You also assert that theorems in your system are not computationally enumerable due to (something) being uncountable, and that simultaneously proof checking is decidable in your system. This is obviously false.

If proof checking is decidable for a given system, then there is a computational procedure which, for any finite string of text, will decide within finite time whether it is a valid proof within the system. (This is what it means for proof checking to be decidable.) The set of finite strings (over a given alphabet) is computationally enumerable. Obviously. Simply iterate over the finite strings, and for each of them, run the procedure to evaluate whether it is a valid proof.

This obviously produces an enumeration of all valid proofs of the system.
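
As a concrete sketch of the template (in Python; the checker here is a toy stand-in, since the only property the argument uses is that it always halts with a yes/no answer):

    from itertools import count, product

    ALPHABET = "ab"  # toy alphabet; a real system would use its own symbols

    def is_valid_proof(s: str) -> bool:
        # Toy stand-in for the assumed decidable proof checker: here a
        # "valid proof" is any string with equal numbers of a's and b's.
        return s.count("a") == s.count("b")

    def enumerate_proofs():
        # Enumerate all finite strings in length-lexicographic order and
        # keep those the checker accepts. Decidability of proof checking
        # is exactly what makes this an effective enumeration.
        for n in count(1):
            for chars in product(ALPHABET, repeat=n):
                s = "".join(chars)
                if is_valid_proof(s):
                    yield s

    gen = enumerate_proofs()
    print([next(gen) for _ in range(4)])  # ['ab', 'ba', 'aabb', 'abab']

Swap in a real checker for a real formal system and the same loop enumerates all of its valid proofs.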

Unless, perhaps, you are saying that there is no decidable procedure which, given a valid proof, determines the statement that it is a proof of? If so, that would be stupid.

I suppose if I define some countable set which is not computationally enumerable, and call the elements of that set "theorems", and then define some uncomputable surjective function from the integers to this set, and call the integers "proofs", then I could say that for each "theorem" there is a proof, and that deciding whether an input integer is a valid proof is decidable (as the answer would always be yes), but that the theorems aren't computationally enumerable, but that would be stupid.

A formal system can, of course, have uncountable models. (Well, modulo metaphysical disputes)

But you talk about having "uncountable proof checkers" because proof checkers are not restricted to strings? Among an uncountable set, not all elements can be named using finite names. The set of possible inputs to a computer for which the computer halts within finite time, up to considering equivalent those inputs which differ only in parts the computer does not check / is not influenced by, is countable.

Or, if you want to appeal to supertask style hypercomputation, you should say so explicitly, probably in the abstract.

If something isn't representable as a string, I would say it isn't a proof, at least in the default sense of the term.


PS. The reason that ¬I'mUnprovable ⇒ ⊥ holds is also ByTheoremUse {⊢∀[Φ] (⊢Φ)⇒Φ}.


I'm having trouble understanding whether you are

(1) a troll hiding behind Carl Hewitt's name using a veil of CS lingo that's just complex enough to appear legit but is actually nonsense, or

(2) actually Carl Hewitt losing his mind with a bunch of nonsense, or

(3) actually Carl Hewitt who is on to something big and the world has just not caught up with you.

To convince us of (3), have your theories been discussed at related conferences and what does the community make of them?


Carl has a long history of twisting truth and sources.

For example https://www.theguardian.com/technology/2007/dec/09/wikipedia...

Or the discussion page of the wikipedia page about him: https://en.wikipedia.org/wiki/Talk:Carl_Hewitt

Constantly using "Prof" to prefix his name speaks for itself too.

(In other words, it's (2), but that's not a recent development.)


"Avoid unrelated controversies and generic tangents."

https://news.ycombinator.com/newsguidelines.html


Unfortunately, Wikipedia has a long history of libeling people :-(

It would be great if HN could stick to substantive discussions and not degrade into personal attacks.


Having skimmed through the paper accessible at the end of the first link, I am leaning towards (1). It contains a barrage of eye-assaulting notation, lots of digressions and repetition, and basically no formal substance (which, for a logic/computation paper, is bizarre).


His WP talk page is a shit-show. I'm leaning towards 2.

https://en.wikipedia.org/wiki/Talk:Carl_Hewitt

https://en.wikipedia.org/wiki/Talk%3ACarl_Hewitt%2FArchive_2

(to disclose, I am not qualified to actually judge the arguments)


As is common with many social media platforms, Wikipedia has huge problems with accountability :-(

It's not worthwhile to go into the whole sordid story on HN.


Sorry that mathematical notation is causing you problems.

How do you see the article lacking in formal substance?


> Sorry that mathematical notation is causing you problems

For somebody who elsewhere in this thread literally asked HN discussion participants to "not degrade into personal attacks" this was an unbelievably arrogant remark.

What about giving us a summary of how the overall community of researchers in the theory of computation is thinking about your breakthrough insights? I'm sure there are some CS Ph.D.s in here that are interested in some context.


Sorry that you took the remark as arrogant.

Mathematical notation can be difficult to grasp :-(

It would be great if other people joined the discussion!


I suggest you bring the discussion to the appropriate academic forums like conferences and workshops. Doing this via wikipedia and hacker news instead makes it look like your academic peers have decided it's not worth their time so that you are coming here instead.

Once the academic community with researchers in the same field accepts your breakthrough results, it will be easier to convince us that it's legit. And it will come here all by itself, without you having to do active promotion.


Of course I participate in academic and research forums.

For example, see the following video:

https://www.youtube.com/watch?v=PJ4X0l2298k

Participating in forums such as Hacker News is also important :-)


"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. "

https://news.ycombinator.com/newsguidelines.html

(1) does not assume good faith. (2) is not the strongest possible interpretation of what Hewitt said. Listing them here is flamebait and reflects poorly on the community. Shame.

Edit: Added guidelines link.


Dear ganafagol,

Your personal attacks should not be part of Hacker News discussions.

Please confine your remarks to substantive issues instead.


Um...? I was hoping to get some clarification that your account is legit, and some references to opinions from the broader research community on your big claims. I came here with an open mind and would have liked to learn something, since CS has been my passion for decades now.

Unfortunately, by reading your comment history, I learned that you've been self-promoting here for quite a while and have always been ignoring questions about references to other people's thoughts on your work. I also learned that there is a Wikipedia talk page and apparently the same pattern happened there 5-6 years ago, where you were asked about citations and could only come up with self-citations and claims of harassment. And after sockpuppet edits you got banned there. That's sad, for everybody.

It would be so easy for you to just throw a handful of citations into the mix and let the research speak for itself! You want to go down in CS history as somebody having discovered something big, right? I'd love to see that! Then I could tell my grandkids that I was sitting in the front row! But you won't convince anybody by only doing self-promotion and complaining about "personal attacks". Scientific progress works by convincing the scientific community, and for that you need to engage with it positively.


Dear Ganafagol,

It's great that CS has been your passion for a long time!

However, personal attacks do not belong on Hacker News. Unfortunately, Wikipedia is a well-known site for libeling people :-( Wikipedia libels should not be repeated on Hacker News!

Hacker News should be a welcoming forum. Participants on Hacker News should not be subjected to personal attacks.

Of course, I engage extensively in the scientific community. Before the covid-19 pandemic, I gave lectures at the University of Cambridge, Oxford University, Edinburgh University, Imperial College London, Stanford, the University of Newcastle, etc.

Also, I give seminars at additional forums. For example, see the following:

https://www.youtube.com/watch?v=PJ4X0l2298k

You can read the articles in the extensive references in articles linked here:

https://professorhewitt.blogspot.com/2019/07/updated-article...

Do you have anything constructive to contribute to the discussion?


PS. Also, do you have any particular questions about the subject matter?


Dear Ganafagol,

Science does not work exactly as you have described. References to scientific papers come afterward. Sometimes it has taken considerable time for references to appear.

Also, references to other work in a scientific article appear in the context of how the references relate to the subject of the article.

Science is not based on authorities per se. Instead, the community works things out over time. Sometimes a consensus will form. Aristotelian physics lasted for a long time. However, consensus may dissolve or be replaced with a revised consensus. For example, Newtonian physics was revised in relativistic physics.


That book doesn't touch on nondeterministic behavior at all. So the nondeterministic extension of the Church-Turing Thesis is absolutely irrelevant here. Your other cited "developments" seem to make sense only in your personal universe.


Yeah - it's a long time since I studied this kind of stuff, but I'm pretty sure the Church–Turing thesis doesn't say anything about non-determinism - isn't that more relevant to complexity than computability?


The original thesis concerns only computable functions, which are deterministic by definition. It doesn't even consider their complexity. There are extended variations of the thesis, though. The complexity variation says that simulating any computable function in one model of computation on another model incurs at most a polynomial slowdown factor. It's believed to be false since the rise of quantum computation.


Nondeterministic Turing Machines were introduced very early and became the basis for the Church/Turing Thesis.

An important complexity result is that, in important applications, a digital implementation can be hundreds of times faster than a parallel lambda expression. Consequently, the lambda calculus cannot be a practical foundation for computing.

See the following for more information:

"An Actor Application Can Be Hundreds of Times Faster Than a Parallel Lambda Expression"

https://professorhewitt.blogspot.com/2021/03/an-actor-applic...


The Church/Turing Thesis is that all computation can be performed by a nondeterministic Turing Machine. The Thesis is false because there are digital computations that cannot be performed by a nondeterministic Turing Machine.

See the following article on how to formalize all of digital computation:

"Physical Indeterminacy in Digital Computation"

https://papers.ssrn.com/abstract=3459566


Nondeterministic Turing Machines are the standard for the Church/Turing Thesis.

Do you have anything substantive to contribute to the discussion?


I am disappointed at several comments on this thread. Professor Hewitt, I personally think you've enriched the community by putting these claims here. They've led me to read more about several things, which I will list here for anyone else who might be looking for related topics: The difference between indeterminism and nondeterminism, bounded and unbounded nondeterminism, the actor model of computation, local arbitration versus global consensus, and state machines versus configuration-based computation.

You've responded to the ad hominem attacks with class. I hope you do not take them as a reflection of the HN community at large; please keep coming back and posting (provocatively) as you have done, because it induces learning.


Thank you very much for your warm words.

Of course, I sometimes make mistakes :-( However, I do try to learn from them!


PS. Also thanks for picking up on important topics including "indeterminism and nondeterminism, bounded and unbounded nondeterminism, the Actor model of computation, local arbitration versus global consensus, and state machines versus configuration-based computation."


Ignore the downvotes: there is value in reading and attempting to understand some of this material, despite the inflammatory way the claims are made. Carl Hewitt is presumably in his 70s, so I'll forgive him for being an old man who probably doesn't give a shit what people think.

In particular, I find the discussion of "paradox" attacks to be quite illuminating. Bertrand Russell attempted to remove self-referential and self-applicable paradoxes through the embellishments of types and orders. Hewitt argues that Gödel basically "cheated" in his proofs because the Diagonalization Lemma violated restrictions on orders.

I'm not qualified to evaluate that argument on the basis of mathematical reasoning, but I do believe it points to a mismatch between the way computation is modeled in the abstract and the way it happens in reality. In the purely abstract world of natural numbers, there's really only one kind of thing: the natural number. Statements about natural numbers can be represented by natural numbers, statements about statements about natural numbers can also be represented by natural numbers, etc. While a given natural number can be parsed to check whether it is a valid encoding of a proposition of a certain order, it is always and forever a natural number and nothing else.

However, I'm not entirely convinced that this captures the nature of computation in reality. When computation is embodied in a physical system, is it possible that the output of some computation can itself manifest computational properties that are not mere compositions of the lower level computational model? Where the interactions between the "higher level" objects simply follow a different set of rules and thus form the foundation of a new formalism?

The notion of types seems to be a way to capture this, by introducing the possibility of distinguishing a "thing" from its abstract representation or description. If everything exists only within the world of the abstract representation, then a thing is indeed no different from its description, as any manipulation of the thing is equivalent to some manipulation of its description. But when the thing is, in fact, a physical object, it is clear that construction of the thing can fundamentally alter the model of computation. Why is this?

I suspect that it is because there are no "pure" or "inert" abstractions in the real world; everything is in fact performing computation all the time. So physical constructions are not just compositions of objects, but also compositions of the computational capabilities of those objects. I realize this probably sounds like navel-gazing at this point, but sometimes that can be a useful activity.


Thanks for your interesting ideas!

My main purpose is to educate. So I am pleased that you got something out of the article :-)


Why is this comment being downvoted?


I assume it's because these are huge claims, yet they're supported by the author's own blog rather than by a peer-reviewed journal, etc.



SSRN is not a peer reviewed journal. It's essentially just a preprint server where anybody can dump whatever they please. So it's more like the personal blog that the parent comment referred to than a peer reviewed journal they were asking you to cite.

Please cite peer reviewed works. As an accomplished academic of 70+ years I'm assuming you are very familiar with that process.


Dear Ganafagol:

Please do not denigrate SSRN, which is used by many prominent researchers to publish their articles. Eventually, many of the articles are republished elsewhere. However, waiting years for trees to be cut down should not be allowed to slow down scientific progress!

You are incorrect that SSRN is a "private blog".

> You are incorrect that SSRN is a "private blog".

And you are incorrect in claiming that I wrote that.


Dear Ganafagol:

I mistyped :-( Instead, you insinuated that SSRN is a "personal blog." Also, you are incorrect that "anybody can dump whatever they please" into SSRN.

If you wait long enough, each article will be republished elsewhere :-) However, you may have to wait years :-( In the meantime, more articles will be published in SSRN, for which you will have to wait even longer for republication!


Because it is wrong. The comment claims that its author has overturned well-understood results, when they have not done so.


You are welcome to join the debate!

Scientific development proceeds by building on previous results, as is the case here, as shown by references in the articles.



