> How does analysis require the existence of uncomputable reals?
I don't know what the other person was referring to, but from many important theorems of analysis you can prove the existence of uncomputable reals. For example, from the theorem that a continuous function on a closed, bounded interval is uniformly continuous you can prove that uncomputable reals exist. If we think that this theorem and many others like it are necessary for analysis, we must conclude that analysis requires the existence of uncomputable reals.
The above fact comes from the area of mathematical logic known as reverse mathematics. The idea is in the name: rather than proving theorems from axioms, we go in reverse. We take some theorem and we see what axioms we can derive from it (using a weak background theory). This gives us a sense of the logical strength of a theorem; the stronger the axioms we can prove from it the stronger the theorem.
An amazing fact is that most classical theorems of mathematics have their logical strength captured by one of five sets of axioms. The weakest are those which are outright provable from the weak background theory (briefly, the background theory, called RCA_0, says that you're allowed to carry out computable processes). The next is the theory WKL_0, formed from RCA_0 by adding one additional axiom, which goes by the name weak Kőnig's lemma. This axiom states that any infinite binary tree has an infinite branch (it's a lemma in 'ordinary' mathematics, but in reverse mathematics it gets treated as an axiom). This is the theory which captures the logical strength of the theorem I mentioned in the first paragraph. From weak Kőnig's lemma we can prove the existence of uncomputable reals: we can construct a computable infinite binary tree so that any branch through it must be uncomputable, and from such a branch one can build an uncomputable real, say by letting the nth bit of its binary expansion record whether the branch went left or right at the nth level.
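To make the last step concrete, here's a minimal sketch (my own illustration, not from any particular textbook) of turning a branch's left/right choices into a real number in [0, 1], reading 1 for "left" and 0 for "right":

```python
def real_from_branch(bits):
    """Interpret a finite prefix of a branch as the binary expansion 0.b0 b1 b2...

    bits[i] is 1 if the branch went left at level i, 0 if it went right.
    The nth bit contributes 2^-(n+1) to the real.
    """
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

# e.g. the choices left, right, left, left give 0.1011 in binary
print(real_from_branch([1, 0, 1, 1]))  # 0.6875
```

Of course a real branch is infinite; any finite prefix only pins the real down to an interval of width 2^-n, but an uncomputable branch yields an uncomputable real this way.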
(The easiest way to build such a tree is to diagonalize against the computable 0-1 sequences, in the spirit of the halting problem. The basic idea is to build a tree of attempts to guess, for each program, a bit different from the one that program outputs. It turns out this can be done in a computable way, and any branch through the tree disagrees with every computable 0-1 sequence, so it must be uncomputable. The key fact used is that while there's no computable procedure to check whether a Turing machine ever halts, if you just want to know whether it halts within n steps (for fixed n), then you can check that with a computer. The program is easy: simply simulate running the Turing machine for n steps and see whether it stops at some point.
To build our tree: list the Turing machine programs as p_0, p_1, p_2, ... (This listing can be done in a computable way.) We will associate level n+1 of the tree with p_n: going left at level n+1 is guessing that p_n, run on its own index n, does not output 1, and going right is guessing that it does not output 0. Start with a single node for the root of the tree. Then, to build the (n+1)-st level, look at each node t on the nth level. Check, in a computable way, whether each of p_0(0), p_1(1), ..., p_(n-1)(n-1) halts within n steps, and if so which bit it outputs. If none of the outputs seen so far contradicts a guess made on the path to t, then t gets a left and a right child at the next level. But if some p_k(k) has halted with exactly the bit the path to t guessed at level k+1, then t gets no child nodes. Intuitively speaking, at stage n we keep our options open and don't commit to what p_n does. But if we later see an output that refutes an earlier guess, we retroactively fix the mistake by not going any further along the paths that guessed wrongly.
For example, it could be that p_0 on input 0 halts with output 1, but it takes 1000 steps for this to happen. So while building the tree, when you get to level 1000 you'll suddenly stop extending every node whose path went left at the root, since you finally learned that the guess made there was wrong and you wanted to go right at the root node all along.
This process for generating the tree is computable, so by definition it's a computable tree. It's infinite, because there's always a way to guess that never gets refuted: a node all of whose guesses disagree with the machines' actual outputs is never killed, so the tree has nodes at every level. However, no branch through this tree can be computable. A branch gives a 0-1 sequence X, where X(k) = 1 if and only if the branch went left at level k+1, and by construction X(k) differs from the output of p_k(k) whenever that output is a bit. If X were computable, it would be computed by some program p_e that outputs 0 or 1 on every input; then p_e(e) halts with output X(e), so the construction forces X(e) ≠ p_e(e) = X(e), a contradiction.)
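The key fact is worth seeing in code. You can't decide whether a program *ever* halts, but for a fixed step budget n, "does it halt within n steps?" is just a bounded simulation. Here's a hedged sketch using a toy counter-machine model of my own invention (lists of instructions) as a stand-in for Turing machines:

```python
# Toy "programs": lists of instructions over 8 registers.
#   ("inc", r)        increment register r
#   ("dec", r)        decrement register r (floored at 0)
#   ("jz", r, target) jump to instruction `target` if register r is 0
#   ("halt",)         stop

def halts_within(program, n_steps):
    """Decide, by direct simulation, whether `program` halts within n_steps steps.

    A False answer means only "not yet" -- it says nothing about halting ever.
    """
    regs = [0] * 8
    pc = 0
    for _ in range(n_steps):
        if pc >= len(program) or program[pc][0] == "halt":
            return True
        op = program[pc]
        if op[0] == "inc":
            regs[op[1]] += 1; pc += 1
        elif op[0] == "dec":
            regs[op[1]] = max(0, regs[op[1]] - 1); pc += 1
        elif op[0] == "jz":
            pc = op[2] if regs[op[1]] == 0 else pc + 1
    return False

looper = [("inc", 0), ("jz", 1, 0)]   # register 1 stays 0, so this loops forever
stopper = [("inc", 0), ("halt",)]
print(halts_within(stopper, 5))    # True
print(halts_within(looper, 100))   # False: no budget ever reveals non-halting
```

The asymmetry visible here (halting is eventually confirmed, non-halting never is) is exactly why the tree construction below has to guess and retroactively prune, rather than decide anything up front.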
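The pruning process can be sketched in a few lines. This is my own toy illustration: I diagonalize against the *output bit* of p_k on input k (a guess is killed once the observed output matches it), so wrong guesses in either direction can be refuted, and I fake the step-bounded simulation with a hard-coded behavior table rather than a real machine model:

```python
# Stand-in for "simulate p_k on input k for n steps": a fixed table, for
# illustration only. Here p_0(0) halts at step 3 with output 1, and p_2(2)
# halts at step 6 with output 0; all other programs never halt.
BEHAVIOR = {0: (3, 1), 2: (6, 0)}

def run_within(k, n):
    """Return p_k(k)'s output bit if it halts within n steps, else None."""
    if k in BEHAVIOR and BEHAVIOR[k][0] <= n:
        return BEHAVIOR[k][1]
    return None

def build_levels(depth):
    """Surviving nodes per level; a node is its tuple of guessed bits (1 = left)."""
    levels = [[()]]
    for n in range(depth):
        nxt = []
        for node in levels[-1]:
            for bit in (0, 1):
                child = node + (bit,)
                # kill rule: if p_k(k) has visibly halted with output b,
                # drop any node that guessed exactly b at position k
                if all(run_within(k, n + 1) != child[k] for k in range(len(child))):
                    nxt.append(child)
        levels.append(nxt)
    return levels

levels = build_levels(8)
print([len(lvl) for lvl in levels])  # [1, 2, 4, 4, 8, 16, 16, 32, 64]
```

Notice the level counts stop doubling exactly when a halting event is observed (level 3 refutes half the guesses about p_0, level 6 half the guesses about p_2): the tree keeps growing, but only along paths consistent with everything seen so far.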
----
Probably the best reference for reverse mathematics is Stephen Simpson's book Subsystems of Second Order Arithmetic. The first chapter is freely available on his website. It nicely lays out the philosophy behind the project and the major results while deferring (most of) the technical details to the rest of the book. http://www.personal.psu.edu/t20/sosoa/chapter1.pdf
Besides that, the Wikipedia page for reverse mathematics isn't awful, but neither is it very good. https://en.wikipedia.org/wiki/Reverse_mathematics