The proof of zero-knowledgeness using the time machine still doesn't make sense to me.
What if the Verifier says, "the information extracted will only be deemed 'useful' if I can be satisfied with reasonable certainty that Google has indeed found a solution"?
In other words... yes, you could trick my Verifier with a time machine, but it would then discard all of the information extracted from a cheat run. Can't we gather information from only the successful runs?
I think the time machine example is circular logic. Consider a scheme where you uncover three adjacent nodes instead of two. We know that this leaks information, since it tells you whether or not two nonadjacent nodes share a color.
Applying the time machine to this scheme, you can't fake having a solution, because after many runs the verifier will learn which nodes share colors and will find an inconsistency (unless you had a correct solution after all).
In effect, the time machine argument fails here because you cannot simulate a given run of the process without leaking the same information that a correct solution would leak, unless you actually know a correct solution.
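To make the leak concrete, here is a toy Python sketch of the three-node variant. The graph, node labels, and helper names are invented for illustration: a path 0-1-2 where nodes 0 and 2 are not adjacent. Even though the prover applies a fresh random color permutation each run, whether the two endpoints match is invariant across runs, so the verifier learns a fact about the hidden coloring beyond its validity.

```python
import random

# Hypothetical example graph: a path 0-1-2; nodes 0 and 2 are NOT adjacent.
edges = [(0, 1), (1, 2)]
coloring = {0: "red", 1: "green", 2: "red"}  # valid: adjacent nodes differ
COLORS = ["red", "green", "blue"]

def reveal_three(path, coloring):
    """One run of the (flawed) three-node scheme: apply a fresh random
    permutation to the colors, then reveal the three adjacent nodes."""
    perm = dict(zip(COLORS, random.sample(COLORS, 3)))
    return [perm[coloring[n]] for n in path]

# The verifier checks whether the two nonadjacent endpoints match.
observations = set()
for _ in range(100):
    a, b, c = reveal_three([0, 1, 2], coloring)
    observations.add(a == c)

# The answer is identical in every run: it depends only on the hidden
# coloring, so it leaks whether nodes 0 and 2 share a color.
assert observations == {True}
```

This is exactly the information a simulator without the solution could not reproduce consistently, which is why the time-machine trick fails for this variant.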
-----
It's pretty easy to see that the regular two-node scheme doesn't leak data. In each run, the verifier is only given the information that the colors of two adjacent nodes differ. Assuming a correct solution, this isn't new information. Because of the random permutation, the actual colors don't matter and can't be correlated between runs. So the only information that "leaks" is the fact that the coloring is in fact valid, which is all we wanted.
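This claim can be checked empirically with a small sketch (the node labels and function names are made up for illustration): an honest prover's revealed pair, after a random permutation, is distributed exactly like the output of a "simulator" that knows no coloring at all and just emits two random distinct colors. Matching transcript distributions is the usual formal sense in which nothing leaks.

```python
import random
from collections import Counter

COLORS = ["red", "green", "blue"]
coloring = {0: "red", 1: "green"}  # true colors of the two adjacent nodes

def real_run():
    """Honest prover: randomly permute the true coloring, reveal both ends."""
    perm = dict(zip(COLORS, random.sample(COLORS, 3)))
    return (perm[coloring[0]], perm[coloring[1]])

def simulated_run():
    """Simulator with NO solution: output any two distinct colors."""
    return tuple(random.sample(COLORS, 2))

random.seed(0)
real = Counter(real_run() for _ in range(60000))
fake = Counter(simulated_run() for _ in range(60000))

# Both are uniform over the 6 ordered pairs of distinct colors, so the
# two transcript distributions are statistically indistinguishable.
assert set(real) == set(fake) and len(real) == 6
```

Since anyone can produce the simulator's transcript without the solution, seeing the real transcript cannot teach the verifier anything beyond validity.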